
  • Beyond the Bot Ep.6: Cobot Capable Robots

    Steven and Tony for Beyond the Bot episode 6 In this episode of Beyond the Bot, hosts Tony DeHart and Steven King dive into the fascinating and fast-evolving world of collaborative robots (cobots). From differences between cobots and traditional industrial robots to the latest advancements in AI, machine vision, and usability, Tony and Steven explore how cobots are reshaping industries. They also unpack how small and medium-sized businesses can adopt these technologies efficiently and affordably—while retaining and repurposing human talent. With insights from real-world use cases, this conversation is a must-listen for anyone curious about the future of automation, workplace collaboration, and robotic integration. Transcript Tony DeHart: Hello, and welcome to another exciting episode of Beyond the Bot, where we bring you the latest in AI and robotics—and how to put it to work in your business today. I'm Tony. Steven King: And I'm Steven. Tony: We're here in the Blue Sky Lab, and today we're going to be talking to you about cobots. Now Steven, just for our listeners before we jump into it—can you give us a little bit of insight into what a cobot actually is, for folks who might not be familiar with the term? Steven: Well, historically we've had industrial robots, which worked behind a fenced-in, protected area. They were very strong and required strict safety protocols. Cobots, on the other hand, are designed to work alongside people. The idea is that a human and a robot can collaborate, which allows for much more flexibility in the tasks we can tackle. Tony: So why might some folks be interested in using a cobot instead of an industrial robot? What are some of the benefits and tradeoffs? Steven: For one, they're less expensive. You can deploy them in more environments—offices, labs, small manufacturing lines—places where industrial robots typically aren't feasible. Cobots are safer and more accessible, which opens up a lot of opportunities for repetitive tasks in tabletop environments, for example. Plus, because they're built with safety in mind, you don't have to invest as heavily in safety cages or large protective infrastructure. Tony: And with less safety equipment required, I imagine the deployment cost is also significantly lower? Steven: Exactly. Even if the robot itself isn't cheaper, the total cost of the project usually is, since we don't need as much hard automation infrastructure. We're not putting in as many fences or interlocks, which saves both time and money. Tony: Without those hard automation pieces, do you get any ancillary benefits—like added flexibility? Steven: Definitely. You can program a cobot to do one task today, switch modes, and have it do something else tomorrow. That ability to pivot makes cobots highly adaptable to shifting business needs. We call them missions—customized sequences the robot performs. It's easy to switch between missions as needs change. Tony: Cobots are clearly having a moment right now. Their capabilities are expanding rapidly. What's driving that? Steven: We get to solve a wide range of problems across different industries. We're seeing cobots being used not just on assembly lines and in warehouses, but also in labs and offices. These robots now offer 0.1 millimeter repeatability—very precise work, which is ideal for tasks that are hard for human hands to do consistently. And that precision opens doors for things like small electronics assembly or lab automation.
Plus, the cost has come down, which is making them accessible to more people. Tony: And how does AI, along with sensor technology and cameras, fit into all of this? Steven: Robots have been around since the 1960s. Traditionally, they moved from point A to point B, doing the same thing over and over. With AI—especially computer vision and machine learning—we now teach robots to identify and interact with objects. That adaptability reduces errors and simplifies programming. Instead of saying "go to coordinate X," we now say "find the object"—and the robot figures out where it is, even if it's moved slightly. Tony: So if you're not locked into precise positioning, you can work more easily alongside humans. And humans, like me, don't always put things back in the exact same place. Steven: Exactly. Think about a kitchen—no chef puts the spatula in the exact same place every time. Vision-enabled cobots can handle that variability. That's what makes them so ideal for collaborative environments—they tolerate real-world messiness.
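(For the technically curious: the "find the object" step Steven describes usually reduces to locating the part in a camera frame and mapping that pixel position into robot coordinates. Below is a minimal sketch in Python with OpenCV, assuming a fixed overhead camera already calibrated to the work surface; the calibration matrix and the move_to call are hypothetical stand-ins for a real hand-eye calibration and a cobot SDK's motion command.)

```python
import cv2
import numpy as np

# Hypothetical pixel-to-table calibration (a homography): maps image
# pixels to robot workspace coordinates in metres. In a real cell this
# comes from a hand-eye calibration against known points on the table.
PIXELS_TO_TABLE = np.array([[0.0005, 0.0,    -0.32],
                            [0.0,    0.0005, -0.24],
                            [0.0,    0.0,     1.0]])

def find_object_centroid(frame_bgr, lower_hsv, upper_hsv):
    """Find the largest blob in an HSV colour range; return its pixel centroid."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def pixel_to_table(cx, cy):
    """Apply the calibration to get workspace X/Y in metres."""
    wx, wy, w = PIXELS_TO_TABLE @ np.array([cx, cy, 1.0])
    return wx / w, wy / w

frame = cv2.imread("workspace.jpg")  # one camera frame of the table (placeholder file)
centroid = find_object_centroid(frame, (20, 80, 80), (35, 255, 255))
if centroid is not None:
    x, y = pixel_to_table(*centroid)
    print(f"Object found; move to ({x:.3f}, {y:.3f}) m")
    # robot.move_to(x, y, z=0.05)  # hypothetical cobot SDK call
```

The point of the sketch is the division of labor: vision supplies the coordinates, so the taught motion no longer depends on the part sitting in exactly the same spot.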
Tony: How does all this affect how we program and operate these robots? Steven: At Blue Sky, we design interfaces that make cobots as easy to use as a power tool. We leverage AI and solid UI design to allow operators with basic training to run them. With just a bit more training, they can create new missions and customize tasks without needing a roboticist on staff. You don't need to write code—we use drag-and-drop and intuitive workflows. Tony: We've got industrial robots, cobots, and now humanoids entering the scene. How do you choose the right one? Steven: It depends on your end goal. If you're doing highly variable tasks, a humanoid might be best. But if it's repetitive—like moving boxes from A to B—a cobot or industrial robot is more efficient. Humanoids are often overkill and less energy-efficient for those tasks. They're built to do everything, but most jobs don't need that. Instead, you want the right tool for the job. Tony: Within the cobot space, there are tons of options. How do you decide which ones to focus on—and how much to spend? Steven: We've tested many, and the landscape has shifted a lot in just the last few years. Universal Robots (UR) has long been a leader, but now there are great alternatives at a third of the cost. One we like is the UFactory xArm 6. It's easy to work with, has a great SDK, and fits most of our needs—good payload, precision, and affordability. And it integrates easily with our existing platforms. Tony: So as a buyer, I'm looking at payload, SDK support, and maybe also service and support? Steven: Exactly. We guide our clients through uptime requirements, number of shifts, and what kind of support they'll need. Many small businesses don't have roboticists, so we ensure strong support options—both onsite and remote. We even offer remote diagnostics and mission updates. Tony: It's not just about the robot, right? What about all the other hardware and tools? Steven: The cobot is just the base. You also need the right software and the right end-of-arm tooling. Sometimes that's suction; sometimes it's a traditional claw or even a custom-built tool for specific tasks. We've 3D printed some that can pick up something as delicate as an egg without breaking it. Others are rigid and allow for tasks like pushing or spraying adhesives. Tony: So who's the quarterback pulling all of this together? The robot manufacturer? The tooling provider? Steven: Usually, it's an integrator. Manufacturers provide the base robot, but most clients need help with customization—whether in software or end tooling. We support clients through the setup and give them tools to continue adapting on their own. And we always make sure they have the training to tweak and evolve their setups over time. Tony: As we look to the future, with more and more of these actually in the workforce being productive—we often talk about managers for people. What's the corollary for robots? Who oversees them? Steven: Yeah, it was interesting—we were talking to a client the other day about HR, Human Resources. And we joked that now we have RR, right? Robotic Resources. I don't know what we're going to call it, but ultimately it's people who have a basic level of training and can make sure the robots are doing what they're supposed to be doing. That might mean just using a user interface—something as simple as a web browser—to edit, change, update, and make new missions. Other times, it might mean having someone with a screwdriver who can disconnect two cables, take out six screws, replace the arm, and ship it back. Tony: Do you give them names? Put googly eyes on them? Steven: Human-computer interaction is something that's really important to us. We want to make sure that people feel comfortable with the robot, both in terms of safety and in how they interact with it. It's a coworker—it's a cobot. So yeah, we sometimes have given them names and even googly eyes. It just depends on the client and how creative they want to be. But we're very clear that we want them to have real, specific names—not just numbers. And part of our work is making sure our clients can communicate clearly with their teams, so people understand what the cobot is actually going to do—not what they fear it might do. Tony: So the future really is man plus machine—not man versus machine. Steven: Exactly. In most cases, these cobots come in and work alongside humans. And then businesses repurpose those people to do different tasks. Almost all of our clients do that. Some have even opened new businesses or launched new opportunities because they could redeploy their people to more meaningful or growth-oriented work. Tony: Steven, it's always exciting to catch up with you on the fast-changing world of robotics and AI. Thanks for joining us on Beyond the Bot. Steven: Thanks for having me. Tony: We'll be back next week with another exciting topic!

  • Beyond the Bot Ep. 8: GenAI Legal & Ethical Implications with Marissa Pt. 2

    Tony and Marissa for Beyond the Bot Episode 8 In the second part of this Beyond the Bot  interview, hosts Tony and Steven are joined by Marissa Porto, the Knight Chair in Local News, to explore the rapidly evolving world of artificial intelligence. The conversation zeroes in on the intersection of AI with creativity, business ethics, environmental impact, and workforce transformation. Together, they unpack nuanced questions around generative AI (GenAI), the responsibilities of companies in retraining their workforce, and the moral and ethical implications of using AI tools for content creation. Whether you’re a tech-savvy entrepreneur, a creative professional, or just curious about the ethical trajectory of AI, this episode offers a rich and thought-provoking dive into what it means to innovate responsibly in a digital age. Transcript Tony DeHart:  Hello and welcome to another episode of Beyond the Bot, where we go beyond the headlines and explore the world of AI and robotics and what it means for you and your business. I'm Tony. Steven King:  And I'm Steven. Tony:  And we're here in the Blue Sky Lab and we're joined by Marissa, the Knight Chair for Local News and Sustainability at UNC Chapel Hill. Marissa, thank you so much for joining us. Marissa Porto:  Thank you for having me. Tony:  So before we jump into the topic here, can you tell us a little bit about who you are and what your relationship with the news and artificial intelligence is? Marissa:  Well, I've spent most of my career in newsrooms covering small communities around the country and leading newsrooms and then leading news businesses for companies in the United States. And here I've been for three years. I'm the Knight Chair in local news. I focus my time and attention on the intersection between journalism and sustainability innovation. And the last few years I've been studying AI and how it's changing the business model. Tony:  So AI is a huge topic in the realm of innovation and creating content, right? And, you know, we talk a lot internally about artificial intelligence as a driver of business value. But today we really want to focus on the creative applications of AI and what some of those might look like. So when we talk about AI art, what exactly are we talking about here? Marissa:  So we're talking about—it's a broad spectrum, right? It's everything from poetry to stories to videos to— Steven:  Music. Marissa:  Music. Great. Everything that is creative is AI art. And that is what we're looking at today and what we're using in our classroom to teach our students. Tony:  So when we look at generative AI, Steven, specifically on the business front, there are a lot of ways that we can use this, right? What are some of the applications that a business might be looking to accomplish with generative AI? Steven:  I think before I answer that question, I might want to say that there's an argument over: is generative or AI art really art? Is it the creative process? Does it make things? So how do we define art, essentially? But let's just assume we're going to call it art because it makes a visual image or it makes something that makes us think. And so there is business value to that. People can make a t-shirt, they can sell that t-shirt. And so now all of a sudden people are like, "Oh, I can make things really quickly." They don't have to have all that talent or skill that they needed before. 
And so now they're able to do things because they have an idea and they can use generative AI to generate that idea that they can then try to sell and make money with. Tony:  So Marissa, when we focus in on that application—if we are generating an image using artificial intelligence—we've kind of cut a creative person out of that equation in some ways. What are the ethical implications of that? Marissa:  Well, I mean, I think there are a lot of ethical implications of what we're doing with AI. Right? First, they're really twofold. The first is: what is AI using for you to be able to go in there and give it a prompt and have it spit something back at you? Is it copyrighted material? And is that copyrighted material being used with permission or not? In which case, they're undercutting the economic value of this content. Right? So that's the first issue. Then the second real issue is that, as you're developing something using AI, at what point does it become something more than generated AI—something that really has artistic value, that has human interaction in it? What's the point there at which it becomes a creative endeavor? Tony:  And so if we go back to our t-shirt example, for instance, what really is that point where it becomes a new creation and not something that anybody can just go and print that image? At what point is it actually copyrightable? Steven:  Well, I think from my perspective, it's one of those things that maybe even the moment that it's generated—now the courts can argue over this—but the moment it was generated was based on a concept I had. So I had an idea for a t-shirt, I really did, and I was basically like, "Our robots suck." Okay, that was the concept we were going with. And I kind of came up—I wanted it comic style. I wanted it to have like the big "pow" kind of icon about it. I crafted this thing till I got to the exact colors I wanted, and then I got it and I thought I had it right. And then I used it, I made it, and now someone else has copied the idea. Do I own the copyright on that? I don't know. I like to think that I do. But ultimately, I could have sold the t-shirt—I didn't, right? But if I did sell the t-shirt, then all of a sudden I'd be losing money on that. So I think the moment that I created the prompt, I created something that didn't exist before. So therefore, I should be able to have the copyright on that. But people like to argue over that. Marissa:  Right. I mean, this is a global issue. It isn't just in the United States. This conversation is happening around the world. And the issue becomes: how much creative work was put into the prompt? So the courts are still diving into this, but the Copyright Office at the Library of Congress said in January that if there's significant human creative input into the content, then it is possible it could be copyrighted. So as an example, if I prompt ChatGPT by saying that—(and you could fill in any number of those tools)—but if I used a prompt and said, "Generate an image of a dog on a skateboard," right? That prompt is just a prompt. But then if I say, maybe I want the dog to be—I like Collies, so a Collie. And I want it to have five puppies, and I want it to have a green beret, and I did some back and forth about what that dog needed to look like and what color it was. Now you're starting to get into beyond the first prompt—you're starting to use a tool with human input and expression. And that is where, with the Copyright Office decision in January, they decided that could be copyrighted. 
Now, who makes the decision and at what point? That's the question right now. Tony:  Well Marissa, I want to hone back in on one thing that you said a moment ago, which is that this question is twofold—not just can the output be copyrighted, but is it being influenced by inputs that might have been copyrighted? So if we go back to our t-shirt example: if I say I want a picture of a dog on a skateboard in, say, Studio Ghibli style or in the style of Salvador Dalí, does that change the copyright implications? And does it change the ethical implications of using that art? Marissa:  Yes. So The New York Times and some other news organizations are now suing Microsoft for this very reason. They're saying Microsoft used that content that is copyrighted by The New York Times and they allowed their tools to be trained by it. And therefore, anything that's being output that has a New York Times style to it or feels similar to a story really was used without permission. So what you see now is, on one side, organizations like The New York Times suing for that. And then on the other side, some organizations, news organizations and media organizations, actually finding a way to contract their content, whatever their content is, and have the AI organization, the company, give them money for the use of that for training purposes. So those are sort of the economic and legal things that are happening in the world today. Steven:  Because it's really complicated. Because I say, I want to make this in Salvador Dalí style. Then I had to have looked at a Salvador Dalí painting to do that. Now, if I were the artist and I copy his style, the courts have said that an artist doesn't own that style. But in the case of the AI part, they had to study and take in that without permission. In most cases it's happening. And so therefore it's like you made a derivative of something you probably shouldn't have had access to in the first place. And that's where it really gets complicated as to what this thing is and kind of who has access and rights to it. So if I do it in The New York Times style, does The New York Times now get a few pennies every time I want to make something in that style? I think the courts are going to have to figure that out. Marissa:  Right. And there's a term called fair use. And fair use is a legal term. And it essentially says, if I'm taking some information and I'm transforming it in some way—so let's say I read something or see something and I decide to use it and transform it—this is outside of AI—I transform it into, let's say, a column. Right? I read something in The New York Times. I thought, oh, this is really interesting. I use some of the information, not word for word, but for a column, a writing that's either pro or con. That's a transformative use. Right? So that's called fair use of that content. And companies are arguing—the AI companies are arguing—well, letting us train our AI bots on content, that's a fair use. And so that's really what the courts are going to have to figure out right now. Tony:  Well, and notable to that example, it's attributed. Right? In that case, you're saying this is information that I got from a New York Times article. But that's not always the case with AI. So for example, Steven, from a business perspective, as a person who comes up with a lot of creative solutions, how would you feel if an AI chatbot was able to parse those solutions and serve them to people without your will or knowledge? Steven:  Yeah. 
I mean, say we come up with a solution and share it with a client, and the client takes it and builds it on their own. That's really frustrating. The same thing is happening in AI every day—but it's collective, and you may not even be aware of it. So as we try to figure out the future of this, I think business owners are going to have to decide: how much do I share publicly? Will there be some way of saying, no, this content is not available to AI bots? Is this something I want to protect? We don't have a good way to do that yet, but I think it'll be up to the people. Maybe the University of North Carolina's Hussman School should come up with it, right? Maybe there has to be some way to build these protections and give people the choice to opt in or opt out. Marissa: And I would say there are businesses already building their own AI models. We just had someone from Bloomberg, a UNC grad, speaking to my media economics class. One of the things she said is that Bloomberg has a closed system: it feeds in only Bloomberg content, because it knows that content has already been vetted. You can't get into the system from outside, but inside the company you can. So you see those sorts of closed systems developing now. Tony: Now Marissa and Steven, there are a lot of things we can do to protect copyrighted materials moving forward. But many media companies argue that the toothpaste is already out of the tube, so to speak. Massive libraries of openly available material have already been used to train these models. So is this even still a relevant question? Is there a way to go back, and where do we go from here, given that the bell has already been rung? Marissa: That is a challenging question. You have to look at it from the vantage point of the United States, and then from a global perspective. Here in the United States, we have a bit of a Wild West attitude about regulating business, and that has continued: this administration is very much anti-regulation, so you see some of those guardrails coming down for different types of businesses. In the EU, by contrast, there are very significant guardrails around the ethical use of AI, how it will be rolled out, and when. All of that is built into EU law, and businesses here could be affected by it. That's a conversation I was having a few weeks ago with some folks visiting from a German university: how do you change the law? Can you put the genie back in the bottle? What would cause the states to actually consider different legislation? The sense of that conversation was that if businesses had to operate in another country under different legislation, it might prompt the United States to think about what the legislation should be here. Steven: Yeah, this is really a business question, right?
As a human, I can't unsee something I've seen. But a trained model is a piece of code built on collected data: we can retrain it and stop using the old model. That doesn't make it financially smart, though. A company is going to fight as hard as it can, because it's a whole lot cheaper to pay lawyers to defend this than to retrain, strip all that data out, and give up the value they're expecting from it. So if the courts decide it, then yes, technically we can put the toothpaste back in the tube, as you said. We'd just use a different tube; we'd have to do things differently. There's a way to do it, but financially it's not in the company's best interest, nor is it in the best interest of innovation. The question is innovation versus your rights. Marissa: And let's be honest: ethics, right? How is this technology going to be used in an ethical way? Right now, one of the issues we have in AI is that it's being used for deepfakes. And deepfakes are particularly challenging if you are, let's be frank, a woman, because a lot of what's happening in that area is sexualized content about celebrities, particularly women. So what are the ethics of not having AI guidelines? The United States is right at the cusp of deciding what to do about deepfakes, and I hope we produce something useful. Those ethics are really important to think about, even if the genie is out of the bottle. Tony: And there are certainly examples like that with a clear wrong approach and a clear right approach. But it does seem that even for businesses, creators, and individuals with the best intentions, who want to do things in an ethical and legal way, there is gray area where the right choice is not always clear. And Steven, from your perspective as a business owner, that level of uncertainty is famously bad for business. So my question to you is twofold. From a business perspective, how do you navigate that uncertain environment? And from a regulation perspective, what can be done to remove that gray area and provide clear guidance for folks? Steven: Yeah, I think we're going to have to see the courts decide. We're going to have to see precedent. Once we have precedent, we can make policies and move things forward, and businesses will know where they can operate. That's going to take time. So what you're going to see is businesses starting, failing, and getting acquired as all of this plays out, with technology changing faster than policy. That has always happened: throughout history, technology moves faster than policy. So we have to figure this out, and hopefully we're driven by good ethical standards along the way.

  • Beyond the Bot Ep. 9: How Computer Vision is Powering the Future of Robotics

    Tony and Bhargav for Beyond the Bot Episode 9 For Beyond the Bot this week, host Anthony DeHart sits down with Blue Sky Robotics' Computer Vision and Robotics Engineer Bhargav Bompalli inside the Blue Sky Lab to explore the cutting-edge world of computer vision and its transformative applications across industries. From basic image filters to advanced AI models like YOLO and generative adversarial networks (GANs), they unpack how machines are being trained to see, understand, and even simulate the world around them. The conversation offers a deep dive into how computer vision intersects with machine learning, reinforcement learning, and robotics, showcasing how synthetic data and simulation environments are revolutionizing everything from quality control to autonomous manufacturing. Whether you're a tech enthusiast, robotics developer, or industry leader, this episode offers powerful insights into how computer vision is rapidly redefining the way machines interact with our world. Transcript: Anthony DeHart: Hello and welcome to another exciting episode of Beyond the Bot, where we break down the latest in AI and robotics and what it means for you and your business. I'm Tony. Bhargav Bompalli: I'm Bhargav. Tony: And we're in the Blue Sky Lab. Bhargav, you're a computer vision engineer on a day-in, day-out basis, right? So there's really nobody better to discuss these topics than someone like yourself. Bhargav: That's correct. Tony: So can you tell us—what does a computer vision engineer actually do on a day-to-day basis? Bhargav: A computer vision engineer builds computer vision algorithms. In a general sense, computer vision is a software technique for allowing computers to interpret, see, and understand the world around them. These techniques can range from simple mathematical formulas to complex AI solutions such as deep learning and generative models. Tony: We often talk about computer vision from the standpoint of artificial intelligence and machine learning, but it doesn't necessarily always have to be AI-based to be considered computer vision, right? Bhargav: That's correct. While computer vision is typically associated with AI these days, even something as basic as applying a filter to an image or performing edge detection counts as computer vision. These tasks use simpler mathematical techniques. Tony: Sometimes the term "computer vision" can be a little misleading. When we think about vision, we think about how our eyes perceive the world. For a computer, it's not always the same. How do we translate shapes, colors, and rich visual details into something a computer can understand? Bhargav: We use sensors. Sensors are a huge part of computer vision. They allow computers to interpret the world. A very common sensor is a standard RGB camera, or a depth camera that combines RGB with depth-sensing technology. Tony: When we take something like an RGB feed from a camera, how does the computer understand that? Is it analyzing the image as a whole or pixel by pixel? Bhargav: It's actually pixel by pixel. Unlike humans, who can look at something and instantly recognize it, a computer breaks the image or video into pixels or groups of pixels and analyzes each pixel's color and value in relation to the ones around it. Tony: So it's really a statistical and pattern recognition process? Bhargav: Exactly.
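(To make the non-AI end of that spectrum concrete: classical edge detection really is just arithmetic over neighboring pixels, no training data involved. A minimal sketch with OpenCV; the filename is a placeholder.)

```python
import cv2

# Classic, non-AI computer vision: the Canny edge detector computes
# intensity gradients over pixel neighbourhoods -- pure arithmetic.
image = cv2.imread("part.jpg", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(image, (5, 5), 0)          # suppress sensor noise
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
cv2.imwrite("part_edges.jpg", edges)                  # white pixels = edges
```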
Tony: We have these basic algorithms for image processing. How do AI and machine learning expand their capabilities? Bhargav: AI takes it to the next level—it introduces autonomy. With AI, we don't have to hardcode all the statistical models. We can just tell the system what we want to detect. Sometimes it doesn't even need explicit instructions—it can learn on its own to identify things like cell phones or pets. Tony: That's deep learning, right? Can you clarify how deep learning fits into the picture with machine learning? Bhargav: Deep learning is a subset of machine learning. In computer vision, we often use convolutional neural networks (CNNs). A common one is YOLO—"You Only Look Once"—which does object recognition. It can detect objects like microphones or people in an image. Other models, like ResNet or semantic/instance segmentation models, go further and actually separate objects from the background. Tony: If we take YOLO as an example, how does it learn what a microphone looks like? Bhargav: It starts with us. We provide training data—images or videos of microphones in various settings. The model learns from those examples. Then, when it sees a new image, it uses that learning to recognize if a microphone is present and gives a probability. Tony: So it's similar to teaching a human a new skill—show it many examples and eventually it figures it out? Bhargav: Exactly. At the pixel level, it's recognizing patterns of color and shapes that make up a microphone.
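(To ground that exchange: with a pretrained detector, the "many examples" step has already been done, and running object recognition is only a few lines. A sketch using the open-source ultralytics package; the weights file and image name are illustrative.)

```python
from ultralytics import YOLO  # pip install ultralytics

# A pretrained YOLO model has already learned its object classes from
# labelled examples; one forward pass returns boxes plus confidences.
model = YOLO("yolov8n.pt")        # small general-purpose weights
results = model("workbench.jpg")  # run detection on one image

for box in results[0].boxes:
    label = model.names[int(box.cls)]          # class index -> name
    confidence = float(box.conf)               # the probability Bhargav mentions
    print(f"{label}: {confidence:.2f} at {box.xyxy[0].tolist()}")
```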
Tony: You mentioned reinforcement learning earlier. How does that differ? Bhargav: Reinforcement learning is more like how a baby learns to walk—through trial and error. The system is rewarded for correct actions and penalized for incorrect ones. So if it recognizes a microphone correctly, it gets rewarded. If it mistakes an iPad for a microphone, it gets penalized. Over time, it learns better. Tony: So with YOLO, humans are labeling data upfront. With reinforcement learning, the human input is more about setting up the reward system and monitoring the outcome? Bhargav: Exactly. The model starts from scratch and learns over time based on feedback. Tony: Let's go back to the hardware side. We talked about RGB sensors. But in robotics, you often need to know not just what something is, but where it is. How do we get that spatial information? Bhargav: RGB gives 2D info. For 3D positioning, we use LIDAR or depth cameras. LIDAR uses laser light to determine object distances, creating a 3D map. This allows a robot to not just recognize an object, but also to locate and manipulate it. Tony: So it's not just what the object is, but where it is and what it should look like in context? Bhargav: Right. You can use it for quality assurance, fault detection, and more. Tony: You mentioned generative AI as an exciting development. Can you expand on that? Bhargav: Generative AI, like DALL-E or GANs (Generative Adversarial Networks), can generate new training data. Instead of collecting real-world images, we can generate photorealistic simulated environments. That strengthens our models without requiring physical setups. Tony: So machines can now create their own training data? Bhargav: Exactly. That reduces human involvement and makes the whole training process more efficient. Tony: What are the implications of synthetic data for industry? Bhargav: It streamlines everything. Instead of physically placing a robot in an industrial environment, we simulate it. The robot can train in a photorealistic version of its real-world workspace. Tony: Can you simulate different environmental conditions too? Like in a dusty, unpredictable factory versus a clean lab? Bhargav: Yes. That's a growing area. Simulation platforms allow us to tweak lighting, textures, colors, materials—everything. That helps improve model robustness and deployment readiness. Tony: Let's talk about real-world applications. We can imagine pick-and-place robotics or automated welding. What's the next frontier? Bhargav: The goal is to move beyond predefined patterns. Instead of telling the robot exactly what to do, we train it to be goal-oriented using reinforcement learning. Then it can handle unpredictable object placements on its own. Tony: These are powerful tools. What safeguards do we need to ensure safety and fairness? Bhargav: Human intervention is key. We need to ensure the training data is unbiased and fair. In reinforcement learning, we have to carefully design the reward and punishment systems to ensure ethical behavior. Tony: Bhargav, this has been super interesting. Thank you for shining a light on this topic. Bhargav: Thank you. Tony: And thank you all for joining us. Stay tuned—we'll have another topic for you next week on Beyond the Bot.
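(A footnote on the depth-sensing exchange above: turning a pixel plus a depth reading into a 3D point in the camera frame uses the pinhole camera model, X = (u - cx) * d / fx and Y = (v - cy) * d / fy. A small sketch with illustrative intrinsics; real values come from the depth camera's factory calibration.)

```python
def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Convert a pixel (u, v) with a depth reading (metres) into a
    3D point in the camera frame, via the pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# Illustrative intrinsics: focal lengths fx/fy and principal point cx/cy.
fx = fy = 615.0
cx, cy = 320.0, 240.0
print(deproject(400, 260, 0.85, fx, fy, cx, cy))  # -> (x, y, z) in metres
```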

  • Beyond the Bot Ep. 11 Live: Becoming “Future-Proof” in the Era of AI and Robotics

    Tony and Steven recording live for Beyond the Bot Episode 11 In this compelling episode of Beyond the Bot, hosts Tony DeHart and Steven King step away from the Blue Sky Lab and go live from 79 Degrees West. Their conversation, with the help of insightful audience participation, dives deep into the evolving landscape of AI and automation in both physical and digital spaces. They explore how the barriers to entry are dropping, how even small businesses can now leverage AI to drive efficiency and competitiveness, and why AI isn't necessarily taking jobs—but those who use AI certainly are. The episode is rich with real-world examples, including how AI is transforming traditional roles, empowering educators to rethink curriculums, and allowing companies to scale operations without scaling costs. Tony and Steven also touch on ethical concerns, from algorithmic bias to missteps by major corporations, and they offer thoughtful strategies for risk mitigation, from constitutional AI to better team knowledge sharing. With valuable audience insights and a look into the future of manufacturing robotics, this conversation offers essential context for understanding where we're headed next in the age of intelligent automation. Transcript: Tony DeHart:  So I'm Tony. Steven King:  And I'm Steven. And for the first time, we're coming to you not from the Blue Sky Lab but from 79 Degrees West. We are live streaming, and this podcast will be available on LinkedIn, YouTube, and all your favorite platforms. Tony DeHart:  Steven, as we're jumping into this—this is a topic that we talk about quite a bit. But we get the sense that it's more important now than ever before. Can you give us a little bit of backstory as to how the conversation is changing and why it's gaining importance? Steven King:  Yeah, well, AI has become a common word across all industries and even in our daily lives—from restaurants to dinner table conversations. It's everywhere. More people are finding new use cases every day, and it's becoming essential for businesses to automate in order to remain competitive. Tony DeHart:  What about adoption? We often hear, "AI is great for others, but maybe not for me." Are we seeing more widespread usage in the business community? Steven King:  Absolutely. Many are using basic tools like ChatGPT. Others are using integrated AI features in the tools they already subscribe to. Some might not even realize they’re using AI. Then there's a group fully embracing it, especially in physical automation—where the savings in labor and materials are significant. Smaller businesses are realizing they need to adopt these tools to compete with larger ones. Tony DeHart:  We often split this into two buckets: physical automation—like robotics in manufacturing—and digital automation. What does AI look like when it’s not robotic? Steven King:  You can automate almost anything. It doesn’t have to be flashy robotics. It could be spreadsheets, finances, email responses—the "back of the house" tasks that eat up time. With the right tools, companies that used to need ten people can now operate with two or three. Tony DeHart:  We’ve seen the barriers to entry drop significantly. On the physical side, components are cheaper. On the digital side, models are more powerful and affordable. Steven King:  Exactly. In 2017, a machine learning project cost us half a million dollars and three months. Today? You can get the same output for $20 a month. Tony DeHart:  That’s wild. And now we see that around 78% of large U.S. 
companies are regularly using AI. What’s the story with the Shopify CEO and their AI-first approach? Steven King:  I love stories like that. These companies experiment quietly, then go public with bold AI strategies. They test on a small scale, see value, then scale up. That efficiency leads to better margins, new markets, and growth. Tony DeHart:  So instead of asking "How can we use AI?" we’re asking "Why wouldn’t we start with AI?" Steven King:  Exactly. I’ve restructured my syllabus at UNCC. We’re skipping traditional coding and going straight to building with AI tools. Why learn to do something a tool already does better? Tony DeHart:  Show of hands from the live audience—how many of you use AI or automation regularly? [Audience raises hands] Tony DeHart:  That’s nearly every hand. Let’s hear some use cases. Who wants to share? Audience Member:  I’m on the board of a homeowners association. I’ve used AI to interpret engineering reports, forecast financial decisions, and even rephrase communications to the community based on real estate investment concerns. It’s like a soft skill AI coach. Steven King:  Great example. And tools like Crystal Knows provide personality profiling, suggesting ways to communicate effectively based on public data. It told me I take on too many responsibilities—which checks out. Tony DeHart:  They really do manipulate you with the right phrase, huh? Steven King:  "It’s not a sure thing, but..." works every time on me. Tony DeHart:  Another use case? Audience Member:  I work in industrial automation. We’re using inline instruments with machine learning to optimize processes in real-time, like detecting protein extraction potential in slurries. Tony DeHart:  Fantastic. And it’s not replacing jobs—it’s enhancing them. Steven, how do we see that playing out? Steven King:  It’s not AI taking your job. It’s someone using AI who will. Like how the power drill replaced the hand drill. It’s about efficiency and competition. Tony DeHart:  And your job might evolve. AI enables you to do more, not just faster. Even creative and technical roles are shifting. Steven King:  Right. AI tools are changing how we approach problem-solving and creativity. It’s like electricity—it didn’t give us better candles, it gave us light bulbs. Tony DeHart:  In our own office, we’ve created a chatbot using every technical manual and support question we've ever received. Makes us look like geniuses on calls. Steven King:  For clients, we’re using computer vision for quality control and anomaly detection. Immediate insight instead of end-of-line checks. Tony DeHart:  Of course, there are risks. Like the chatbot hack that led to $1 car offers from a major auto brand. Steven King:  Or Target’s predictive analysis mishap that accidentally revealed a teen’s pregnancy to her family. AI was right, but it wasn’t a good use of the data. Tony DeHart:  So how do we mitigate those risks? Steven King:  Train employees. Vet vendors thoroughly. Understand intellectual property risks, like what happened at Samsung with pasted code into ChatGPT. Tony DeHart:  What about data best practices? Steven King:  Keep humans in the loop, but design around potential human bias. Amazon’s diversity recruiting tool failed because it learned unintended biases. Even word choice mattered: "We did" versus "I did." Tony DeHart:  It’s not just about having one expert. Everyone needs to understand the tools. Steven King:  Right. 
It’s like a construction site—everyone needs to know how to use the tools, not just rely on the "drill guy." Tony DeHart:  Before we move to best practices, any AI gone wrong stories? Audience Member:  Can we use AI to see what biased the system in the first place? Steven King:  Great question. That’s where constitutional AI comes in. It’s a separate watchdog AI that flags changes violating preset principles. It doesn’t evolve with the same data as the main system. Audience Member:  Couldn’t that watchdog also develop biases? Steven King:  Potentially. But by isolating it from evolving data, we reduce that risk. It’s not perfect, but it’s one of the best guardrails we have right now. Audience Member:  How might generative AI impact robotics, especially with U.S. manufacturing trends? Steven King:  Love that question. With onshoring, automation is essential. We’re combining large visual models with language models so you can say, "Pick up the water bottle," and the robot knows what to do. We're not quite there yet, but we’re getting close. AI is making robots more intuitive and lowering error rates. Tony DeHart:  Amazing. It’s clear that we’re still early, but the light bulb moments are already here. And the future? It’s looking very bright indeed.

  • Beyond the Bot at Automate 2025: IGUS, RBTX, and the Future of Affordable Automation

    In this episode of Beyond the Bot, Tony speaks with Jacob from IGUS at Automate 2025. They explore how IGUS, traditionally known for plastic components, is revolutionizing the automation space with scalable, low-cost solutions and platforms like RBTX and Axis. From the challenges facing small-to-midsize businesses to the myth of the skills gap in manufacturing, Jacob offers an unfiltered view of where the industry is headed. For anyone exploring automation—whether you're just getting started or looking to optimize—this conversation delivers insight, inspiration, and actionable advice. Transcript: Tony: Hello and welcome to another exciting episode of Beyond the Bot, where we bring you the latest in AI and robotics and how it can benefit your business. I'm Tony, and I'm here today—not in the Blue Sky Lab—but out at Automate 2025, where we're catching up with some of the most exciting folks in the industry to see what they have to offer. I'm joined now by Jacob from IGUS. Jacob, thank you so much for joining me. Jacob: Yeah, this is fun! Tony: So Jacob, before we dive into some of the nitty-gritty, can you tell me a little bit of the background of what IGUS is? I hear you guys are a plastics company, but I see a whole lot of robots around me. Jacob: Yeah—it's how we're advancing ourselves and advancing this industry. People do know IGUS as a plastics component manufacturer. It all started because our owner would go around to businesses in Germany asking, "Give me your toughest challenges." A company eventually did—it was an automotive customer—and he ended up designing this mushroom-shaped bearing. That bearing was the first thing IGUS ever invented. From there, we now have 18 different business units, each with their own subcategories. Just on the bearing side, we have 64 "flavors" of plastic that we manufacture in Germany—but that's just the start. We have plastics for different temperatures, tensile strengths, pressures... We also moved into the flexible cable industry and cable management—that's where we're known for the black e-chain, the kind you see moving on a seventh axis or triplex robot. That black cabling? That's us. And now, as you're seeing—because we know where this industry is heading—we've gotten into automation. Tony: We at Blue Sky Robotics use your seventh axis linear rail on a daily basis. You've been a great partner. Can you tell me a little bit about RBTX and what that service offering is? Jacob: Yeah. RBTX is basically our automation marketplace. It came out of our LCA unit—Low Cost Automation. We started designing the Rebel, a gantry, and a delta robot using our components, to help people get started in automation. Because if people can't start, they can't grow. The purple in the RBTX brand came in when we saw that some customers needed solutions our robots couldn't fulfill. That's just fact. So, we started distributing other people's robots and products too, to still deliver a complete solution. We now have 500 different solutions under $18K, all visible on our website. You can watch customer videos, see exactly what products were used, and view transparent pricing. Tony: You guys power a lot of different solutions across many industries. What are some barriers to adoption you're seeing in the marketplace? Jacob: Many of our customers are under that $200,000 project benchmark. A lot of integrators won't touch projects under 200K because of overhead. So, we focus on small to midsize businesses. One major barrier is the knowledge gap.
These customers may not know much about automation—they just know they need it, or someone told them they need it. You're also dealing with legacy decisions—people who've used the wrong tech for 10-15 years. And you’ve got to carefully navigate that, helping them realize what’s actually right for their application without overselling them the latest widget just because it’s new. It's about delivering the right  solution. Tony:  It strikes me that there’s a dual knowledge gap: what’s possible, and how to execute. For folks who are curious about automation but not robotics experts, how do they find the right integration partner—an "automation ally," so to speak? Jacob:  Focus on companies that develop resources beyond just selling their own products. If you're going all-in with one manufacturer, then study everything they offer. But if a company is sharing broad industry knowledge and resources, even if you never buy from them, that shows they care about growing the industry—not just their sales. Tony:  What trends are you seeing in automation more broadly, or in specific industries? Jacob:  The cobot world just got a big shake-up. Previously, robots were labeled safe or collaborative based on certain features. Now, a robot isn’t safe unless the cell  is safe. That’s huge. Many companies leaned hard into branding as "cobot" manufacturers, but it’s always been about making the entire system safe for the customer. Tony:  How has reshoring affected IGUS? Jacob:  Huge impact. Six months ago, we unveiled 100 injection molding machines in Rhode Island—up from zero. We had no manufacturing in the U.S. before. Now, we can produce bearings, e-chains, cable management systems right here. We’re also moving our Dryspin tech to the States. This is maybe phase three of our long-term plan to grow U.S. manufacturing. Tony:  That’s just what you’re  doing. But it also helps your customers who are reshoring and need automation to support that. Jacob:  Exactly. Tony:  With all this change—especially in AI and robotics—how is IGUS helping folks get educated and started? Jacob:  We just launched the Axis Community, a collaborative automation hub. I want to provide more than products—I want to help people grow their careers and businesses. We've been developing Axis for six months and just launched it with partners like Kawasaki, Item, and Flex Line Automation. Axis is the place to go before  you hit Google. It includes the RBTX Academy—not just focused on IGUS. It's filled with partner content, business know-how, and industry basics. Like, what's an end-of-arm tool? What's a delta vs. gantry robot? These aren’t common knowledge yet. We’ve got 16 videos up now, a 24/7 chat, incentive programs, even training trips to Rhode Island. It’s like the best of Reddit, LinkedIn, and YouTube, all in one. Tony:  How do folks get signed up? Jacob:  Just go to axis-community.com. The homepage is public, and it’s all free. Once you make an account, you get full access. There are three phases: 1) everything free; 2) paid courses if you want deeper content; and 3) in-person trainings, which obviously cost more. Tony:  Amazing. Two last questions. First—among all these robots around us, what’s the one thing in this display that you're most excited about? Jacob:  I’m loving our cobot bench, especially the Dobot. It’s got capacitive sensing. Most cobots have to touch  you to stop. This one senses your hand from a distance and halts. That’s next-level safety. Tony:  Last one: what’s a myth in automation you’d like to debunk? 
Jacob:  The skills gap. I hate hearing about it. People say students don’t care about manufacturing—but I’ve been involved in FIRST Robotics for seven years, traveling to schools across the U.S. These kids do  care. They do  want this. The real issue is businesses aren’t paying attention to the talent in their own communities. If you’re not investing in them, they’ll go elsewhere—tech, startups, you name it. They’re ready. We just need to show up. Tony:  Jacob, we really appreciate your time. Thank you for all the work you and IGUS are doing. Looking forward to catching up again soon. Jacob:  Thanks, man!

  • How Spray Robots Handle Part Variability: Cutting Calibration Costs with Smarter Automation

    Spray robots have become a cornerstone of robotic finishing and surface finishing automation, offering consistency, efficiency, and safer working environments compared to manual painting. But one major obstacle continues to challenge manufacturers: part variability. Even the most advanced spray robot can struggle when parts differ slightly in shape, size, or orientation. These differences may seem minor, but they can drive up calibration costs, reduce uptime, and create quality issues. For manufacturers running mixed product lines — especially high-mix, low-volume production lines — variability can be the difference between a profitable automation project and one that never pays back. This article explores why part variability is such a costly challenge, and how vision systems and no-code automation software are helping companies overcome it — making technologies like Fairino robots more accessible and effective for industrial coating applications.

The Problem of Part Variability in Robot Painting

Spray robots are designed to follow precise paths. When every part looks the same and arrives in the same position, the system works flawlessly. But in reality, no production run is perfect. A part might be misaligned on a conveyor. A supplier batch may include slight variations in dimensions. Complex geometries make orientation difficult to maintain. For the robot, these small differences can cause big problems:

- Overspray or gaps in coverage.
- Uneven coating thickness.
- Adhesion or durability failures that lead to rework.

In robot painting applications where quality standards are strict, variability often forces recalibration. That means downtime, additional labor, and higher operating costs — problems that multiply in high-mix, low-volume environments.

Why Calibration Costs Add Up

Calibration is not a one-time task. It's a repeated cost driver.

- Initial setup: mapping spray paths, adjusting air pressure, and tuning flow rates for each new part or SKU takes time.
- Recalibration cycles: new SKUs or minor part changes require repeated adjustments.
- Skilled labor: programmers and coating specialists are needed to set and maintain these parameters.
- ROI impact: long calibration cycles reduce throughput, increase costs, and make automation harder to justify for high-mix, low-volume operations.

Solutions: Reducing the Pain of Variability

1. Vision Systems for Adaptive Robotic Finishing

Part variability is often subtle — a few millimeters in placement or a minor shift in thickness. For a spray robot, though, this can mean wasted paint and inconsistent finishes. Vision systems use cameras and sensors to capture each part in real time. Instead of relying on a rigid program, the robot dynamically adjusts its spray path, angle, and distance. Key benefits include:

- Accurate spray coverage, even with inconsistent part placement.
- Less overspray and material waste.
- Improved surface quality, lowering the chance of defects or rework.

In industries like automotive and electronics, where coating precision directly impacts durability and aesthetics, vision-enabled robotic paint booth setups can significantly reduce cost and downtime.
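One way to picture the adjustment step: if the vision system reports where the part actually sits versus where the taught path assumes it sits, the path can be shifted by that error before spraying. A simplified sketch in Python (translation only; a production system would also correct rotation and standoff distance, and the numbers here are invented):

```python
import numpy as np

# Nominal spray path taught for a perfectly positioned part:
# (x, y) waypoints in millimetres along the part surface.
nominal_path = np.array([[0, 0], [50, 0], [100, 0], [150, 0]], dtype=float)

def adapt_path(nominal, expected_centroid, measured_centroid):
    """Shift the taught path by the part's measured placement error.
    Translation alone already absorbs conveyor placement jitter."""
    offset = np.asarray(measured_centroid) - np.asarray(expected_centroid)
    return nominal + offset

# Vision reports the part 4 mm right and 2.5 mm up of its nominal spot.
corrected = adapt_path(nominal_path, (75, 20), (79, 22.5))
print(corrected)  # every waypoint shifted by (+4.0, +2.5) mm
```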
2. No-Code Automation Software

Programming spray paths is one of the biggest contributors to calibration costs. Traditional systems require skilled operators to write or adjust code whenever a new part enters production. When part variability demands constant adjustments, this dependency drives up both costs and lead times. No-code automation software changes the equation. With simple, graphical interfaces, operators can adjust spray paths without writing code. The benefits:

- Shorter calibration cycles: new spray patterns can be programmed in minutes.
- Reduced reliance on specialized labor: operators with minimal training can make adjustments.
- Faster changeovers: makes robot painting more practical for smaller runs or frequent part changes, especially in high-mix, low-volume production lines.

Robot manufacturers like Fairino supply the robot hardware; the no-code control layer typically comes from an integrator or third-party platform. For example, integrators like Blue Sky Robotics deploy no-code interfaces on top of Fairino robots and pair them with explosion-proof robots when hazardous environments require it.

Key Takeaways for Manufacturers

Part variability is one of the most expensive challenges in spray robot operations. The main cost drivers are calibration cycles, downtime, and reliance on skilled programmers. Manufacturers can mitigate these challenges with:

- Vision systems that adapt to part differences in real time.
- No-code software that simplifies path programming and changeovers.
- Flexible platforms like Fairino, including explosion-proof options for hazardous environments.

Final Thoughts

Spray robots remain one of the most effective tools in surface finishing automation, but part variability can erode their value if not addressed. The combination of adaptive vision systems and no-code automation software is helping manufacturers cut calibration costs, reduce rejects, and improve uptime. For companies struggling with variability in robot painting, investing in smarter tools and platforms is no longer optional — it's the key to making robotic finishing reliable, safe, and profitable. Interested in learning more about the Autocoat System by Blue Sky Robotics? Reach out to our engineers today!

  • Automated Paint Systems: Overcoming Operational and Business Challenges in Industrial Coating

    Automated paint systems have become an essential tool in modern manufacturing. They promise consistent finishes, reduced waste, and improved efficiency compared to manual painting. But for many companies, the real struggle comes after installation. High setup times, demanding maintenance, regulatory hurdles, and integration issues can erode ROI if they aren't addressed from the start. This article explores the most common operational and business challenges that come with adopting automated paint systems — and how manufacturers can overcome them with the right design, planning, and technology.

Challenge 1: High Setup & Calibration Time

One of the biggest frustrations for manufacturers is the time it takes to set up and calibrate a new system. Spray paths, air pressure, and flow rates all require careful tuning. For high-volume operations, the payoff is worth it. But for low-volume or custom jobs, this setup burden can eat into uptime and increase costs. How to address it:

- Use AI-driven programming tools that optimize spray paths automatically.
- Invest in modular setups that allow for faster changeovers.
- Work with vendors who provide strong onboarding and support for calibration cycles.

Reducing setup time makes automated paint systems more practical not just for mass production, but for custom and small-batch work as well.

Challenge 2: Maintenance Demands

Automated spray systems rely on pumps, hoses, atomizers, and nozzles that are prone to clogging and wear. Regular cleaning is critical, but frequent maintenance reduces uptime and increases labor requirements. If ignored, it also leads to uneven coating quality and premature equipment failure. How to address it:

- Adopt predictive maintenance tools that monitor nozzle performance and detect wear before it causes downtime (see the sketch after this list).
- Choose booth and sprayer designs that allow quick access for cleaning.
- Implement routine service schedules to minimize unplanned stoppages.

Manufacturers that treat maintenance as a strategic process, not a reactive one, see much higher system availability.
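As an illustration of the predictive idea: a simple drift rule over a logged sensor signal can flag a clogging nozzle before quality suffers. A minimal sketch, assuming spray pressure is being logged; the window and tolerance values are invented and would be tuned per system:

```python
import statistics

def flag_nozzle_drift(pressure_log, window=50, tolerance=0.05):
    """Compare the recent average spray pressure against a baseline
    window; sustained drift beyond the tolerance suggests clogging
    or wear. Returns (alert, relative_drift)."""
    baseline = statistics.mean(pressure_log[:window])
    recent = statistics.mean(pressure_log[-window:])
    drift = abs(recent - baseline) / baseline
    return drift > tolerance, drift

# pressure_log would come from the sprayer's pressure transducer (bar).
alert, drift = flag_nozzle_drift([2.1] * 50 + [2.0, 1.9] * 25)
print(f"drift={drift:.1%}, maintenance flag={alert}")
```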
It must fit into a broader production line that includes conveyors, curing ovens, and inspection processes. Poor integration often creates bottlenecks that slow the entire operation. How to address it: Plan system layouts that account for the full finishing process, not just the painting step. Work with integrators experienced in linking paint systems with other industrial coating processes. Choose scalable designs that can expand as production grows. Successful integration ensures that paint automation contributes to overall efficiency instead of creating new choke points. Turning Challenges Into ROI Each of these challenges has a solution — and when solved, they become competitive advantages: Faster setup → higher uptime and more flexibility. Proactive maintenance → fewer breakdowns and lower repair costs. Safer, compliant systems → reduced risk and easier expansion. Skilled teams supported by automation → consistent quality and throughput. Integrated workflows → smoother operations and greater ROI across the line. The key is to plan for these factors early, not after the system is installed. Final Thoughts Automated paint systems are more than just robotic paint sprayers in a booth. They are complex industrial coating solutions that require careful planning and support to deliver on their promise. High setup time, heavy maintenance, health and safety requirements, skilled labor needs, and integration challenges can all undermine results if left unchecked. By addressing these obstacles up front — with explosion proof robots, smarter booth designs, predictive maintenance, and proper training — manufacturers can ensure their investment pays off. In the end, the companies that succeed with paint automation are those that treat it as a holistic system, not just a single piece of equipment.
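To make the predictive-maintenance recommendation concrete, here is a minimal sketch of a nozzle-drift monitor in Python. It assumes a hypothetical stream of atomizing-pressure readings per nozzle; the class name, baseline value, window size, and 10% drift threshold are illustrative assumptions, not vendor specifications.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class NozzleMonitor:
    """Flags a nozzle for service when its rolling-average pressure
    drifts too far from the clean-nozzle baseline (values illustrative)."""
    nozzle_id: str
    baseline_psi: float            # pressure observed when the nozzle is clean
    drift_limit: float = 0.10      # flag service at 10% deviation from baseline
    window: int = 10               # readings kept in the rolling average
    readings: list[float] = field(default_factory=list)

    def record(self, psi: float) -> None:
        self.readings.append(psi)
        if len(self.readings) > self.window:
            self.readings.pop(0)   # keep only the most recent readings

    def needs_service(self) -> bool:
        if len(self.readings) < self.window:
            return False           # not enough data to judge yet
        drift = abs(mean(self.readings) - self.baseline_psi) / self.baseline_psi
        return drift > self.drift_limit

# Simulated telemetry: steady pressure, then a gradual drop as the tip clogs.
monitor = NozzleMonitor("gun-1", baseline_psi=42.0)
for psi in [42.0] * 10 + [40.0, 39.0, 38.0, 37.0, 36.0, 35.0, 34.0, 33.0, 32.0, 31.0]:
    monitor.record(psi)
print("service needed:", monitor.needs_service())  # True once drift exceeds 10%
```

The same pattern extends naturally to flow rates or spray-pattern width; the point is to compare live readings against a known-good baseline instead of waiting for visible coating defects.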

  • Automatic Spray Painting: How to Eliminate Overspray and Improve Coating Consistency

    Spray painting is one of the most common finishing processes in manufacturing, but it comes with two major challenges: overspray and inconsistent coatings. These issues drive up costs, increase waste, and put product quality at risk. That's why more manufacturers are turning to automatic spray painting. By combining robotic precision with advanced sensors and controls, systems such as a robotic paint sprayer or multi-axis robot painter can deliver smooth, consistent finishes while cutting down on wasted paint.

What Is Automatic Spray Painting?

Automatic spray painting uses robots or programmable spray systems to apply paint evenly across surfaces. Unlike manual painting, which depends on an operator's skill, automation ensures repeatable results every time. Typical components include:
- Spray robot or robotic paint sprayer – mounted on a robotic arm for precision control.
- Control software – programs spray paths, flow rates, and timing.
- Sensors and feedback loops – monitor coating thickness, pressure, and spray angle.
- Spray booth & ventilation – maintain safety and environmental compliance, especially alongside explosion-proof robots designed for hazardous coatings.

Industries that benefit most include:
- Automotive finishing lines.
- Aerospace and defense coating systems.
- Consumer electronics and appliance manufacturers.
- Furniture and wood product finishing.
- Powder coating applications where uniform coverage is critical.

The Problem of Overspray in Painting

Overspray happens when paint particles miss the target surface and settle elsewhere. Why it's a problem:
- Material waste: Up to 70% of paint may be lost.
- Cost impact: Higher spend on coatings and solvents.
- Environmental compliance: More VOC emissions to manage.
- Cleanliness: Extra time and resources needed for booth cleaning.

Even with manual operators using spray guns, overspray is almost unavoidable. Robots can reduce this issue significantly.

How Automatic Spray Painting Reduces Overspray

Automated systems address overspray by controlling every variable in the process:
- Precise path programming – A robot painter follows exact spray patterns without deviation.
- Atomization control – Nozzles regulate droplet size for optimal coverage.
- Flow rate monitoring – Prevents excess paint release and maintains efficiency.
- Consistent distance & speed – Robots keep the ideal angle and spray distance that humans can't replicate for long periods.

The result? Paint goes where it's needed, with less waste and higher transfer efficiency — whether it's liquid paint or powder coating.

The Problem of Inconsistent Coating Thickness

Manual painting often produces coatings that are too thick in some areas and too thin in others. This leads to:
- Defects like runs, orange peel, or poor surface finish.
- Increased rework and higher reject rates.
- Aesthetic issues that damage brand perception.
- Non-compliance with safety or durability standards.

How Automatic Spray Painting Improves Coating Consistency

Automatic spray systems ensure every cycle is identical, which dramatically improves coating quality:
- Closed-loop feedback – Sensors measure thickness in real time and adjust spray accordingly (a simple sketch of this loop follows below).
- Adaptive spray paths – Robots follow complex geometries with ease.
- Multi-axis coverage – A robotic paint sprayer mounted on a flexible arm can reach angles human painters might miss.
- Repeatability – Every part is coated to the same high standard, reducing variability.
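To illustrate the closed-loop idea named above, here is a minimal sketch of a proportional flow correction driven by a film-thickness reading, assuming a hypothetical thickness sensor. The function name, gain, limits, and target are all illustrative; real paint controllers add filtering, integral action, and coating-specific calibration curves.

```python
def adjust_flow(flow_ml_min: float, measured_um: float, target_um: float,
                gain: float = 0.5, limits: tuple[float, float] = (50.0, 300.0)) -> float:
    """Return a corrected paint flow rate from one thickness reading.

    gain is ml/min of correction per micron of error (illustrative).
    """
    error_um = target_um - measured_um           # positive -> coating too thin
    corrected = flow_ml_min + gain * error_um
    lo, hi = limits
    return max(lo, min(hi, corrected))           # clamp to the gun's safe range

flow = 120.0
for reading_um in [48.0, 52.0, 55.0, 58.0]:      # simulated readings, target 60 um
    flow = adjust_flow(flow, reading_um, target_um=60.0)
    print(f"measured {reading_um} um -> flow {flow:.1f} ml/min")
```

Each pass nudges the flow toward the target film build instead of applying a fixed recipe, which is what lets an automated system hold tolerance as conditions drift.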
Additional Benefits of Automatic Spray Painting

Beyond overspray reduction and consistency, automation brings added value:
- Improved health & safety – Workers spend less time exposed to paint fumes and VOCs.
- Explosion-proof robots – Provide safety in environments with flammable coatings or solvents.
- Lower labor dependency – Less reliance on skilled painters, who are increasingly hard to hire.
- Faster cycle times – Robots maintain speed without fatigue.
- Regulatory compliance – Easier to meet environmental standards with efficient systems.

Key Considerations When Implementing Automatic Spray Painting

Before investing, manufacturers should evaluate:
- Upfront costs vs. ROI – Initial setup is significant, but long-term savings are real (see the back-of-the-envelope estimate below).
- Maintenance needs – Nozzles and air systems must be kept clean.
- Part variability – Vision systems or adaptive tooling may be required for flexible production.
- Integration – Systems must align with conveyors, curing ovens, and quality inspection steps.
- Special coatings – Some applications, like powder coating, require unique handling and equipment.

Final Thoughts

Overspray and inconsistent coatings have long been a pain point in finishing operations. With automatic spray painting, manufacturers can finally solve both problems while also reducing costs, improving safety, and boosting productivity. As technology advances, expect to see more spray robots and robot painters equipped with smart sensors, AI-driven controls, and explosion-proof designs for hazardous environments. For companies seeking better quality and profitability in their coating operations, automation is no longer optional — it's the future.
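As a rough illustration of the upfront-cost-versus-ROI point, the back-of-the-envelope calculation below compares annual paint spend at two transfer efficiencies. All figures (volume, price, and efficiency values) are hypothetical; the 35% manual figure simply mirrors the "up to 70% lost" worst case cited earlier.

```python
def annual_paint_cost(litres_on_parts: float, transfer_efficiency: float,
                      cost_per_litre: float) -> float:
    """Paint purchased = paint that must land on parts / transfer efficiency."""
    return litres_on_parts / transfer_efficiency * cost_per_litre

# Hypothetical line: 5,000 L of coating must end up on parts per year at $20/L.
manual  = annual_paint_cost(5_000, transfer_efficiency=0.35, cost_per_litre=20.0)
robotic = annual_paint_cost(5_000, transfer_efficiency=0.65, cost_per_litre=20.0)
print(f"manual spray:  ${manual:,.0f}")           # ~$285,714
print(f"robotic spray: ${robotic:,.0f}")          # ~$153,846
print(f"annual paint savings: ${manual - robotic:,.0f}")
```

Numbers this simple ignore labor, rework, and compliance costs, all of which typically push the comparison further in automation's favor.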

  • Fads vs. Trends in Robotics: What’s Here to Stay

    The robotics industry is evolving at a pace that can feel both exhilarating and overwhelming. Every year, new gadgets, smart machines, and automation tools hit the headlines, generating buzz across tech media and social platforms. But not all of these innovations are built to last. Distinguishing between a fleeting fad and a genuine trend is crucial for businesses, investors, and tech enthusiasts alike. In the rapidly changing world of robotics, understanding which developments have staying power can save resources, guide strategic decisions, and help companies stay ahead of the curve.

What Are Fads in Robotics?

A fad is a short-lived craze: an innovation that captures attention but lacks the depth or utility to sustain long-term adoption. Fads in robotics often rely heavily on hype, flashy marketing campaigns, or viral social media moments. While they can create momentary excitement, their impact on the industry is typically minimal, and they often fade as quickly as they appeared.

Examples of fads in robotics include some early consumer-focused gadgets, like dancing robots or novelty robotic pets. These devices may capture public imagination initially but often fail to deliver meaningful functionality or long-term value. Fads share common traits: they spike in popularity almost overnight, attract media attention, and see sudden, intense demand, but adoption is usually narrow and short-lived.

The risks of chasing fads are real. Businesses that invest heavily in these products may face wasted resources, unmet consumer expectations, and rapidly declining sales. Investors might see inflated valuations, only to watch them plummet when the initial excitement dies down. Essentially, fads thrive on novelty, not necessity.

What Are Trends in Robotics?

Unlike fads, trends represent long-term movements that fundamentally reshape industries and workflows. Trends are driven by innovation, technological advancement, and practical application; they don't rely solely on hype. In robotics, trends indicate a shift in how machines are integrated into daily operations and industrial processes.

Examples of genuine trends in robotics include the rise of collaborative robots (cobots) in manufacturing, AI-driven automation systems, autonomous delivery robots, and advanced warehouse robotics. These innovations offer tangible benefits: improved efficiency, cost savings, higher accuracy, and safer working conditions. Trends are marked by steady growth, widespread adoption, and ongoing research and development.

The advantages of embracing trends are significant. For businesses, they provide long-term ROI, operational improvements, and competitive advantage. For the robotics industry as a whole, trends help shape strategic priorities and guide innovation roadmaps.

Key Differences Between Fads and Trends

Understanding the distinctions between fads and trends is essential for making informed decisions in the robotics field. Some of the key differences include:
- Timeline: Fads emerge quickly and fade just as fast. Trends grow steadily over time and have a lasting presence.
- Adoption: Fads are often novelty-driven, appealing primarily to early adopters or hobbyists. Trends are utility-driven, addressing real needs and becoming embedded in workflows.
- Investment appeal: Fads may attract speculative investment due to excitement, whereas trends draw strategic investment based on measurable outcomes.
- Public perception: Fads are often seen as "cool" but impractical; trends are recognized for their practical impact and transformative potential.

By keeping these differences in mind, businesses and investors can better allocate resources and make smarter decisions about which technologies to adopt or fund.

How to Identify a Trend in Robotics

Spotting a true trend requires careful observation and analysis. Indicators that a robotics development is a genuine trend include:
- Consistent media coverage: Technologies that continually appear in industry publications, research reports, and professional discussions often indicate enduring interest.
- Growing patents and R&D investment: A rising number of patents or funding commitments signals long-term strategic importance.
- Real-world deployments: Trends are tested and implemented in real applications, whether in factories, warehouses, or healthcare facilities.
- Industry partnerships: Collaborations between robotics companies and established industrial players indicate serious adoption potential.

Conversely, red flags for fads include sudden viral hype without practical application, limited or non-existent ROI data, and a rapid drop in attention following initial excitement.

Case Studies: Fads vs. Trends

Fad example: humanoid robots. In the early 2000s and 2010s, humanoid robots like Honda's ASIMO and similar prototypes captured widespread attention. These robots were often showcased in demonstrations, trade shows, and media coverage, sparking excitement about the possibility of human-like robots entering everyday life. Despite the hype, most humanoid robots failed to gain practical adoption. They were expensive, complex to operate, and offered little functional value beyond demonstrations and experiments. Consumer and industrial interest remained limited, and many projects were scaled back or shifted toward research purposes rather than commercial deployment.

Trend example: collaborative robots in manufacturing. Collaborative robots, or cobots, exemplify a true trend in robotics. Over the past decade, they have steadily gained traction in factories and assembly lines. Unlike traditional industrial robots that required safety cages and specialized programming, cobots are designed to work alongside human workers safely and intuitively. Their adoption is driven by clear business needs: efficiency, flexibility, and safety, making them a sustainable trend rather than a passing craze.

Implications for Stakeholders

Understanding the difference between fads and trends has significant implications:
- Investors: Spotting long-term trends allows investors to allocate resources strategically and avoid speculative pitfalls.
- Businesses: Companies that adopt sustainable trends can improve operational efficiency, stay competitive, and innovate effectively.
- Consumers: Recognizing realistic versus hype-driven products helps buyers make informed decisions, avoiding disappointment or wasted spending.

By focusing on trends rather than fads, all stakeholders can better navigate the fast-paced robotics landscape.

Looking Ahead: Emerging and Future Trends in Robotics

As the robotics industry continues to evolve, several developments are poised to define the next decade. Emerging trends in industrial robotics include smarter automation systems, AI integration for predictive maintenance, autonomous logistics solutions, and human-robot collaboration in complex environments. Meanwhile, consumer robotics is also maturing, with smart home and healthcare applications gaining traction. Observing these patterns helps identify which technologies are likely to influence the robotics industry for years to come.

In the fast-moving world of robotics, distinguishing between fads and trends is more than an academic exercise; it's a strategic necessity. Fads can generate excitement but rarely provide long-term value, while trends represent sustained, practical innovation that transforms industries. By carefully analyzing adoption patterns, market signals, and real-world applications, stakeholders can navigate the robotics landscape more effectively and invest in technologies that are truly here to stay. Understanding the difference ensures that the next wave of robotic trends delivers both excitement and tangible impact.

👉 Want to learn more? Reach out to our engineering team today.

  • Robotic Sanding: Transforming Surface Finishing

    Surface finishing is a critical step in painting, manufacturing, woodworking, and metalworking. A smooth, precise finish ensures quality, enhances aesthetics, and prepares surfaces for painting, polishing, or coating. Traditionally, sanding has been a labor-intensive, repetitive, and sometimes hazardous process. In recent years, robotic sanding has emerged as a transformative solution, combining precision, consistency, and safety to optimize production workflows. By integrating robotics into sanding operations, manufacturers can achieve higher quality finishes, reduce material waste, and improve worker safety. Robotic systems can handle repetitive sanding tasks tirelessly while adapting to complex surfaces, making them a game changer across multiple industries.

What Is Robotic Sanding?

Robotic sanding refers to the use of industrial robots equipped with sanding tools to automate surface finishing processes. Unlike manual sanding, robots provide consistent pressure, speed, and movement across all surfaces, eliminating human error and fatigue. A sanding robot can be programmed to handle flat panels, contoured surfaces, or complex shapes, making it ideal for furniture, automotive, and aerospace applications. These systems often integrate sensors to maintain proper force and alignment, ensuring that every pass delivers uniform results.

Key Components of a Robotic Sanding System

A typical robotic sander system includes several key components:
- Robot arms and motion systems: provide precise, repeatable movements.
- Sanding tools and abrasives: adaptable end-effectors for various materials and finishes.
- Sensors: force, torque, and vision sensors ensure consistent pressure and detect surface irregularities.
- Control software: programs and adjusts sanding paths for optimal efficiency.

Some systems also integrate robot polishing tools, allowing the same robot to perform multi-step finishing processes without manual intervention.

Benefits of Robotic Sanding

Automating sanding processes brings numerous advantages:
- Consistent surface quality: robots maintain uniform pressure and motion.
- Increased productivity: systems can operate continuously, reducing cycle times.
- Enhanced worker safety: minimizes exposure to dust, repetitive motion injuries, and hazards from sanding equipment.
- Reduced material waste: precise sanding limits over-removal of material.
- Flexibility: capable of sanding, polishing, and preparing surfaces for painting.

By combining automatic sanding with robot polishing, manufacturers can streamline production and reduce manual labor costs.

Applications Across Industries

Robotic sanding is used across a variety of sectors:
- Woodworking and furniture manufacturing: sanding panels, edges, and curved surfaces efficiently.
- Automotive and aerospace parts finishing: preparing surfaces for painting, sanding, and polishing.
- Metal fabrication and sheet metal: smoothing edges, deburring, and refining surfaces.
- Composite materials and specialized surfaces: handling delicate or complex geometries with precision.
- Painting and polishing: in addition to sanding, robots can apply coatings and perform polishing tasks, ensuring high-quality, consistent results.

Types of Robotic Sanding Processes

Robotic systems can perform multiple sanding techniques:
- Belt sanding: ideal for flat surfaces or long edges.
- Orbital sanding: provides a swirl-free, smooth finish on panels.
- Flap sanding: conforms to irregular surfaces and contours.
- Dry vs. wet sanding: selected based on material type and finishing requirements.
- Surface contour sanding: adapts to complex geometries for uniform material removal.

Technology Enhancements

Modern robotic sanders use advanced technologies to improve performance:
- Force and torque sensors: adjust sanding pressure in real time to prevent material damage.
- Vision systems: detect surface defects or uneven areas for adaptive sanding.
- IIoT integration: collects data for predictive maintenance, process optimization, and quality control.
- Adaptive programming: allows robots to automatically modify paths and sanding speed based on surface feedback.

These enhancements make robots capable of performing multiple finishing operations (sanding, painting, and polishing) more efficiently than ever before.

How Sensors Improve Safety and Precision

Sensors in robotic sanding systems provide critical safety and quality benefits. Force and torque sensors prevent excessive pressure, reducing the risk of damaging materials or tools (a simple sketch of this control loop follows below). Vision sensors detect obstacles or uneven surfaces, ensuring consistent sanding and protecting human operators in shared workspaces. Collaborative robots (cobots) equipped with these sensors can safely work alongside humans, expanding automation possibilities in smaller workshops or mixed-use environments. Overall, sensors make robotic sanding not only more precise but also safer for employees.

Challenges and Considerations

While robotic sanding offers many advantages, implementing a system comes with considerations:
- High upfront cost: robotic systems require investment in hardware, software, and training.
- Programming complexity: setting up sanding paths and processes takes time and expertise.
- Maintenance: sanding tools, sensors, and robots need regular upkeep for optimal performance.
- Surface variation: complex geometries or varying materials may require additional programming or adaptive technology.

Despite these challenges, the long-term gains in efficiency, consistency, and safety often outweigh initial costs.

Future Trends in Robotic Sanding

The future of robotic sanding is closely tied to advances in AI, sensors, and automation:
- Collaborative robots: enabling shared workspaces with humans safely.
- AI-driven sanding and polishing: adaptive systems that adjust to surface conditions in real time.
- Smarter sensors: improved vision, force, and torque feedback for precision finishing.
- Integration with IIoT: real-time monitoring, predictive maintenance, and workflow optimization.

Conclusion

Robotic sanding is transforming surface finishing by combining precision, efficiency, and safety. From sanding panels to polishing and painting complex surfaces, robots provide consistent, high-quality results while reducing labor costs and workplace hazards. By adopting sanding robots and integrating robot polishing and automatic sanding technologies, manufacturers can stay competitive, improve product quality, and optimize operations. As robotics and sensor technology continue to advance, the future of surface finishing will be smarter, safer, and more efficient than ever.

👉 Reach out to our team today to see how robotic sanding technology can enhance safety, precision, and efficiency in your operations.
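To make the force-feedback behavior described above concrete, here is a minimal sketch of a contact-force control loop in Python. The target force, tolerance band, and step size are illustrative assumptions; a real controller runs at high frequency with filtered sensor data and velocity limits.

```python
import random

TARGET_N = 25.0      # desired contact force (illustrative)
TOLERANCE_N = 2.0    # dead band: no correction inside this range
STEP_MM = 0.05       # tool offset change per correction

def corrected_offset(offset_mm: float, measured_n: float) -> float:
    """Nudge the tool away from the surface when force is too high,
    toward it when force is too low (positive offset = retracted)."""
    error = measured_n - TARGET_N
    if abs(error) <= TOLERANCE_N:
        return offset_mm                                   # within band
    return offset_mm + (STEP_MM if error > 0 else -STEP_MM)

offset = 0.0
for _ in range(5):
    force = TARGET_N + random.uniform(-5.0, 5.0)  # simulated force/torque reading
    offset = corrected_offset(offset, force)
    print(f"force {force:5.1f} N -> tool offset {offset:+.2f} mm")
```

Holding force rather than position is what lets the same program sand a slightly warped panel without gouging the high spots or skipping the low ones.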

  • Takt Time vs. Cycle Time

    In the fast-moving world of manufacturing and operations, efficiency is everything. Companies that deliver products at the right pace while minimizing waste gain a major competitive advantage. But how do managers and teams measure whether they're working at the right speed? Two of the most important metrics are takt time and cycle time. At first glance, these terms might seem interchangeable, but they play very different roles in process management. Understanding the nuances of takt time vs. cycle time can help organizations streamline production, balance workloads, and keep customer demand in focus.

What Is Takt Time?

The term "takt" comes from the German word for rhythm or beat, and that's exactly what it represents in production: the rhythm of customer demand. A common question managers ask is, "What's the definition of takt time?" In simple terms, takt time is the maximum amount of time available to produce one unit in order to meet customer demand. The formula is straightforward:

Takt Time = Available Production Time ÷ Customer Demand (Lean Enterprise Institute: Takt Time)

For example, if your factory operates 480 minutes in a day and customers require 240 units, the takt time is two minutes per unit. This means every two minutes, one product should roll off the line to stay in sync with demand.

Why does this matter? Takt time prevents overproduction, one of the seven forms of waste identified in lean manufacturing. By aligning production speed with actual demand, companies avoid tying up resources in unnecessary inventory and keep workflows balanced.

What Is Cycle Time?

Cycle time is often confused with takt time, but it focuses on something different: the actual time it takes to complete a task, process, or produce one unit. Unlike takt time, which is demand-driven, cycle time is process-driven. Cycle time can be measured at different levels:
- Operator cycle time: how long it takes an employee to finish their portion of work.
- Machine cycle time: how long a machine requires to complete its operation.
- Total cycle time: the complete time from start to finish for a unit or process.

For example, if it takes 90 seconds to assemble one product on the line, that's the cycle time. Measuring this helps managers understand how efficient their current processes are and identify bottlenecks.

Key Differences Between Takt Time and Cycle Time

Although they sound similar, takt time and cycle time serve very different purposes:
- Driver: Takt time is based on customer demand; cycle time is based on actual production capability.
- Purpose: Takt time sets the pace of production; cycle time measures how well your process performs.
- Focus: Takt time looks outward (customer needs); cycle time looks inward (process efficiency).

Imagine takt time as the beat of a metronome guiding a band, while cycle time is how quickly each musician can actually play their part. If the band doesn't stay on beat, the music falls apart. A simple table makes the comparison clear:

Aspect    | Takt Time                                   | Cycle Time
Based on  | Customer demand                             | Process execution
Purpose   | Sets pace of production                     | Measures actual performance
Impact    | Prevents overproduction or underproduction  | Identifies bottlenecks and inefficiencies

How Takt Time and Cycle Time Work Together

The true value comes when both metrics are used together. If your cycle time is longer than takt time, it means your process cannot keep up with customer demand. Customers may face delays, and the system risks overloading. In this case, adjustments like adding labor, improving equipment, or streamlining steps are necessary. On the other hand, if your cycle time is shorter than takt time, your team is producing faster than demand requires. While this may seem positive, it can lead to overproduction, wasted inventory, and higher storage costs. The key is balance: aligning cycle time closely with takt time ensures steady, efficient, and demand-driven output. (A small worked example appears at the end of this article.)

Common Misconceptions

Because the terms sound similar, there are a few frequent misconceptions worth clearing up:
- "They're the same thing." In reality, one sets the pace (takt) while the other measures actual performance (cycle).
- "Faster cycle times are always better." Not if they're far below takt time; that can lead to overproduction.
- "Takt time never changes." Customer demand and production hours can fluctuate, which means takt time must be recalculated regularly.

Practical Applications in Lean Manufacturing

In lean manufacturing, both takt time and cycle time are essential tools. Here are some real-world ways businesses use them:
- Production planning: Setting takt time ensures production matches customer demand.
- Bottleneck analysis: Comparing cycle times across tasks highlights slow points in the workflow.
- Resource allocation: Aligning workforce and machine availability with takt time avoids both idle time and overburden.
- Continuous improvement: Tracking takt and cycle times supports Kaizen initiatives, helping teams identify small but impactful improvements.

For example, a company might notice that while their takt time is three minutes, one step in production consistently takes five minutes. By automating that step or reassigning tasks, they can close the gap and get back on pace.

Why Understanding Both Metrics Matters

Efficiency isn't just about working faster; it's about working at the right pace. Takt time ensures production aligns with customer needs, while cycle time shows how effectively processes are running. Together, they provide a full picture of whether a company is on track to deliver products efficiently without creating waste. In the bigger picture, using takt time and cycle time correctly helps businesses:
- Meet customer expectations consistently.
- Reduce costs tied to overproduction or inefficiency.
- Improve worker satisfaction by balancing workloads.
- Build resilience to shifts in demand.

Conclusion

When it comes to takt time vs. cycle time, the distinction is more than academic; it's a practical toolset for efficiency and customer satisfaction. Takt time provides the "beat" based on demand, while cycle time reveals the actual speed of your process. Companies that measure, monitor, and balance both are far better equipped to deliver on time, minimize waste, and continuously improve operations. For organizations embracing lean manufacturing, these metrics are essential if you want to boost productivity without sacrificing quality. Start by calculating both and comparing them regularly. It's one of the simplest yet most effective steps toward building a leaner, smarter, and more customer-focused operation.

👉 Contact our team today to explore how we can help you align takt time and cycle time in your operations for greater efficiency and productivity.
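Here is the worked example promised above: a few lines of Python that compute takt time with the article's numbers and compare it against a measured cycle time. The 15% overproduction margin is an illustrative threshold, not a lean-manufacturing standard.

```python
def takt_time(available_minutes: float, demand_units: int) -> float:
    """Takt time = available production time / customer demand."""
    return available_minutes / demand_units

def pace_check(takt_min: float, cycle_min: float) -> str:
    if cycle_min > takt_min:
        return "cycle > takt: the process cannot meet demand; find the bottleneck"
    if cycle_min < 0.85 * takt_min:   # 15% margin is illustrative
        return "cycle well below takt: overproduction risk; rebalance the line"
    return "cycle roughly matches takt: paced to demand"

takt = takt_time(480, 240)                 # 480 min/day, 240 units -> 2.0 min/unit
print(f"takt time: {takt:.1f} min/unit")
print(pace_check(takt, cycle_min=1.5))     # the 90-second assembly example
```

Run daily or per shift, a check like this turns the takt/cycle comparison from an occasional audit into a routine signal.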

  • How Macrovey’s Mobile Warehouse System Exemplifies the Future of 3PL Automation

    In the ever-evolving world of third-party logistics (3PL), the pressure is on: faster fulfillment, more SKU variety, less labor, and tighter margins. E-commerce has pushed expectations sky-high, and traditional warehouse operations are often ill-equipped to handle the complexity. That's why the Mobile Warehouse System developed by Macrovey in collaboration with Blue Sky Robotics is a breakthrough worth examining. This project rethinks what automation looks like in 3PL environments, offering a blueprint for how flexible, intelligent, and mobile systems can meet today's operational demands. From its design and deployment to its robotic control and vision stack, the Mobile Warehouse System highlights how Blue Sky Robotics is helping integrators like Macrovey bring advanced automation to logistics environments that demand both adaptability and scalability.

A New Take on Warehouse Automation: Built on Wheels

The Mobile Warehouse System isn't just a robotics cell; it's a fully integrated fulfillment environment constructed on the beds of two 18-wheeler trailers. That's right: a deployable, modular warehouse that can be relocated, reconfigured, and reimagined for a variety of use cases, from permanent fulfillment operations to rapid-deployment logistics hubs. Inside this mobile unit, a tightly coordinated network of robots works together to receive, store, retrieve, sort, and package goods for shipment. Macrovey designed the architecture and orchestrated system-level integration, while Blue Sky Robotics developed the two critical robotic workstations that make this solution tick: induction and kitting.

How It Works: The Four-Step Workflow

The Mobile Warehouse System runs through four core stages:

1. Induction: Vision-Guided Sorting. At the entry point of the system, incoming items are introduced and sorted by a UFactory xArm 6 robot outfitted with Blue Sky Robotics' vision system and motion control software. This induction station identifies the item, determines its destination, and places it into the appropriate bin (a simplified sketch of this decision logic follows the workflow). The process is entirely vision-guided and designed to accommodate variable packaging: think bags of snacks, bottles of hand sanitizer, boxed items, and more. The ability to dynamically sort without hardcoded part locations allows the system to handle SKU diversity with ease.

2. Storage: Bin Transport and Shelving. After items are sorted into bins, autonomous mobile robots (AMRs) take over. These AMRs transport the bins from the induction station and store them on an organized shelving system inside the trailer. The AMRs form the connective tissue of the system, shuttling items from induction to storage and later to the kitting station.

3. Kitting: Order Fulfillment with Dual xArms. When an order is received, an AMR retrieves bins with the relevant items and delivers them to the kitting station, where two UFactory xArm 6 robots (again controlled by Blue Sky's vision-guided platform) select the required items to fulfill the order. This dual-arm setup enables efficient parallel picking, allowing multiple orders to be prepared in tandem or single orders to be fulfilled with greater speed. The flexibility of the system allows for rapid SKU switching and minimal changeover time.

4. Packaging: Bag and Ship. Once the kitting step is complete, the grouped items are passed through an automatic bagging machine and sealed for shipping. From there, they're either staged for final shipment or handed off to outbound logistics.
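As a conceptual illustration only (not Blue Sky Robotics' actual software), the sketch below shows the shape of the induction decision: take a vision detection, check confidence, and route the item to a bin or to manual review. All names, SKUs, and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sku: str
    confidence: float
    grasp_xyz: tuple[float, float, float]  # pick point from the vision system

# Hypothetical routing table: which storage bin each known SKU belongs in.
BIN_FOR_SKU = {"snack-bag": "bin-A", "sanitizer": "bin-B", "boxed-item": "bin-C"}
MIN_CONFIDENCE = 0.90

def route(det: Detection) -> str | None:
    """Return a target bin, or None to divert the item to manual review."""
    if det.confidence < MIN_CONFIDENCE:
        return None                       # uncertain detection: a human handles it
    return BIN_FOR_SKU.get(det.sku)       # unknown SKU also falls through to None

for det in [Detection("snack-bag", 0.97, (0.42, -0.10, 0.05)),
            Detection("sanitizer", 0.62, (0.30, 0.08, 0.04))]:
    target = route(det)
    print(f"{det.sku}: {'place in ' + target if target else 'divert to manual review'}")
```

The key property is that routing is data (a lookup table), not hardcoded robot positions, which is what makes onboarding a new SKU a configuration change rather than a reprogramming job.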
This end-to-end pipeline, from intake to outbound, is managed in a footprint no larger than two semi-trailers, proving that automation doesn't need a massive warehouse to make a massive impact.

Why This System Matters for 3PL Providers

Macrovey's Mobile Warehouse System wasn't built for show. It was built to solve real, recurring pain points in 3PL workflows. Here's why it's so valuable to the logistics industry:

1. Built for High-Mix Environments. With so many different SKUs moving through modern fulfillment networks, automation must be adaptable. The Mobile Warehouse System handles variable product shapes, sizes, and packaging without needing rigid tooling or complex reconfiguration, thanks to vision-based sorting and flexible robotic control. This makes it ideal for 3PL providers that handle small consumer goods, fast-moving inventory, and seasonal or short-run product lines.

2. Scalable and Modular by Design. The system doesn't require a massive warehouse footprint. It can be deployed where it's needed: on-site at a client facility, inside a hub-and-spoke network, or even as a pop-up fulfillment center during peak seasons. This makes it especially appealing to 3PL companies with variable workloads or multi-client operations.

3. Reduces Dependence on Manual Labor. The automation of induction and kitting, two of the most repetitive and labor-intensive steps in fulfillment, significantly reduces physical strain and reliance on a large labor force. This is critical in an industry facing persistent labor shortages and high turnover.

4. Enhances Order Accuracy and Speed. Vision-guided robots don't fatigue, and they don't misplace items. The result is faster order processing and improved accuracy, even as product lines grow more complex.

Blue Sky Robotics' Role: Powering Induction and Kitting

Macrovey's vision for a mobile warehouse relied on tight coordination between multiple technologies, but much of the system's intelligence lives inside its induction and kitting workstations. These are the most complex decision-making nodes in the pipeline, and they were built by Blue Sky Robotics. Here's how we contributed:
- Custom vision software: Our computer vision stack enables real-time identification and grasp planning across diverse item types.
- Robot-agnostic control layer: Though this system uses UFactory xArms, our architecture is designed to work across multiple brands, offering long-term flexibility and vendor freedom.
- Low-code operator interface: Warehouse staff can adjust parameters, onboard new SKUs, or override picks through an intuitive user interface with minimal training.
- System responsiveness: By minimizing latency in detection-to-action cycles, our control system enables quick and fluid robot motion, even with unpredictable item presentation.

This isn't just about programming robots; it's about building intelligent, reconfigurable systems that integrate seamlessly into 3PL operations.

A New Playbook for 3PL Automation

Macrovey's Mobile Warehouse System is a model for how 3PL companies can reimagine fulfillment:
- Replace static lines with dynamic cells.
- Deploy automation in compact, mobile formats.
- Use vision-guided robotics to handle SKU variety and changeovers.
- Enable integrations with AMRs, WMS platforms, and bagging systems.
- Scale automation gradually, with systems that are modular, not monolithic.

And with a partner like Blue Sky Robotics, these innovations don't need to live in the distant future. They can be deployed now.

The Takeaway: Flexible Automation, Delivered

The Mobile Warehouse System solves the real-world challenges of 3PL: SKU diversity, labor limitations, and constant change. By marrying Macrovey's integration expertise with Blue Sky Robotics' flexible automation stack, this system proves that fulfillment automation doesn't need to be complex, expensive, or static. Instead, it can be smart, agile, and deployable wherever your logistics operation needs it most.

Let's Build Your Next System

Whether you're a 3PL provider looking to automate key workflows, or an integrator seeking a technology partner with deep robotics experience, Blue Sky Robotics is here to help. We specialize in building flexible automation systems that work across platforms, evolve with your business, and deliver lasting value. Contact us today to discuss your vision, or to see how our robot-agnostic tools can bring it to life.
