
  • 2D vs 3D Pictures in Robotics: Why the Difference Matters More Than You Think

    The difference between a 2D picture and a 3D picture sounds like a photography question. In robotics, it is an engineering constraint that determines what a robot arm can and cannot do. A 2D picture captures color, contrast, edges, and patterns in a flat plane. It tells the robot what something looks like. A 3D picture adds depth, the Z axis, producing a spatial map that tells the robot where something is, how far away it sits, how it is oriented, and what shape it has in three dimensions. That additional information is not a refinement. It is the difference between a robot that can locate and grasp objects in variable positions and a robot that cannot.

    Understanding this distinction clearly is the fastest way to avoid a common and expensive mistake: specifying a 2D vision system for an application that actually requires 3D, or spending money on a 3D system for a task where 2D is fully sufficient.

    What a 2D Picture Actually Contains

    A 2D image from an industrial camera is a grid of pixels. Each pixel has a color value (red, green, and blue intensities) and a brightness value. That is the complete information set. Width: yes. Height: yes. Depth: no.

    From this data, vision software can answer questions like: Is an object present in the frame? What color is it? Where does it appear in the image? Does it have a scratch or a label? What shape is its 2D silhouette? Is a barcode readable? These are genuinely useful questions for a wide range of manufacturing tasks. Presence detection, label verification, barcode reading, color classification, and surface inspection on flat parts in fixed orientations all fall within 2D capability. A 2D camera answers them quickly, cheaply, and reliably.

    What it cannot answer: How far away is the object? Is it tilted toward or away from the camera? If two objects appear to overlap in the image, which one is on top? What is the object's orientation in three-dimensional space? These questions require depth data that a 2D image simply does not contain.

    What a 3D Picture Actually Contains

    A 3D picture in industrial robotics is typically a point cloud: a dense collection of data points where each point has an X, Y, and Z coordinate. The Z coordinate is depth, the measured distance from the camera to that point on the object's surface. A structured light camera produces this data by projecting a known pattern of light onto the scene and measuring how the pattern deforms across object surfaces. The deformation encodes depth. The camera captures the deformed pattern and software reconstructs the 3D geometry from it.

    The result is a picture that looks like a wireframe or height map of the scene. Every visible surface is mapped in space. The vision software can then calculate an object's exact position and orientation in three dimensions, determine which of several overlapping objects is on top, measure surface features and dimensions, and calculate a grasp point and approach angle that accounts for the object's actual spatial position rather than just its appearance in a flat image.

    For a robot arm, the difference is fundamental. A 2D image tells the robot where an object appears to be in the camera frame. A 3D point cloud tells the robot where the object actually is in the physical world.
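    The data structures make the contrast concrete. As a rough sketch (NumPy, with a synthetic point cloud standing in for real camera output), a 2D picture is a height x width x 3 array of color values, while a 3D picture is an N x 3 array of metric coordinates, and only the latter can answer a distance question:

```python
import numpy as np

# A 2D picture: height x width x 3 color channels. No depth anywhere.
image = np.zeros((480, 640, 3), dtype=np.uint8)
pixel = image[240, 320]  # -> [R, G, B]: color, not distance

# A 3D picture: N points, each with metric X, Y, Z coordinates (meters).
# Synthetic cloud here; a structured light camera would supply the real one.
points = np.random.uniform([-0.2, -0.2, 0.4], [0.2, 0.2, 0.9], size=(10_000, 3))

# Questions only the point cloud can answer:
distances = np.linalg.norm(points, axis=1)
print(f"Nearest surface point: {distances.min():.3f} m from the camera")
print(f"Shallowest point (smallest Z): {points[:, 2].min():.3f} m deep")
```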
    The Practical Comparison

    The clearest way to see the difference is through specific tasks.

    Barcode scanning on a conveyor. A 2D camera is the right tool. The barcode is flat, the lighting is controlled, and no depth information is needed to read the code. Adding a 3D camera adds cost with no benefit.

    Picking a part from a randomly filled bin. A 3D camera is required. The parts are at different depths, different orientations, and partially occluding each other. A 2D image cannot determine which part is on top, how it is tilted, or what approach angle gives the robot a clean grasp. Without depth data, the robot misses picks or damages parts consistently.

    Verifying a label is correctly applied. 2D vision handles this. The label's presence, position, and readability are all visible in a flat image.

    Palletizing mixed case sizes. 3D vision is required. The camera needs to determine the dimensions and position of each case in three-dimensional space to plan a stable stack. A 2D image cannot provide that.

    Surface defect inspection on a flat machined part. 2D vision is sufficient for detecting scratches, discoloration, and cracks when the part arrives in a consistent, flat orientation.

    Assembly alignment where the target varies in position. 3D vision is required. The robot needs to know the exact 3D position of the mating feature to correct for positional variation and place the component accurately.

    The pattern is consistent: tasks that require knowing where something is in space need 3D. Tasks that require knowing what something looks like can often use 2D.

    Which Setup to Use with a Cobot

    The UFactory Lite 6 ($3,500) supports both 2D and 3D camera integration through UFactory's open-source vision SDK. For simple inspection and identification tasks, a 2D camera is a low-cost addition. For pick and place from variable positions, the SDK includes ready-to-run examples for the Intel RealSense D435 and Luxonis OAK-D-Pro-PoE stereo depth cameras. For production-grade 3D vision applications, the Fairino FR5 ($6,999) and Fairino FR10 ($10,199) support full ROS integration with industrial structured light cameras, including the Mech-Eye series for demanding surfaces and precision applications.

    Getting Started

    Use our Cobot Selector to match an arm and camera type to your application, or explore our automation software to see how Blue Sky Robotics' computer vision tools support both imaging approaches. When you are ready to see a working cell, book a live demo. To learn more about computer vision software, visit Blue Argus. Browse our full UFactory lineup and Fairino cobots with current pricing.

    FAQ

    What is the difference between a 2D and 3D picture in robotics?
    A 2D picture captures color and contrast in a flat plane; it shows what something looks like but not where it is in space. A 3D picture adds depth data, producing a spatial map with X, Y, and Z coordinates for every visible surface. For robot arms, 3D pictures are what enable grasping objects in variable positions and orientations.

    Can a robot use 2D pictures for pick and place?
    Yes, if parts always arrive in a consistent, known position and orientation. No, if parts vary in position, orientation, or stack in three dimensions. The latter requires a 3D picture to locate the pick target reliably.

    What does a 3D point cloud look like?
    A point cloud looks like a wireframe or height-colored map of the scene. Each dot in the cloud represents a location on a real surface, colored by height or depth. Dense point clouds from industrial structured light cameras produce detailed spatial representations of objects that vision software uses to calculate grasp points and robot trajectories.

  • Object Detection Camera for Robots: What It Is and How to Choose the Right One

    Every vision-guided robot cell starts with the same question: how does the robot know what it is looking at and where that object is? The answer is an object detection camera paired with the software that processes its output.

    Object detection in robotics is not a single technology. It is a capability built on top of a camera, a vision processing pipeline, and a set of algorithms that together allow the robot to find an object in the scene, identify what it is, determine its position and orientation, and act on that information. The camera is the starting point, but the camera alone does not detect anything. It captures data. Detection is what happens next.

    This post explains how object detection cameras work in robotic applications, which camera types are suited to which detection tasks, and how to connect the detection layer to a working robot cell.

    What Object Detection Actually Requires

    Object detection in a robotic context has three distinct requirements that the camera and software must satisfy together.

    Localization - The system needs to know where the object is in space, not just that it exists. For a robot arm to pick a part, it needs a 3D coordinate: X, Y, and Z position plus orientation. A camera that only confirms presence is not sufficient for manipulation tasks.

    Classification - In mixed-SKU environments or applications where multiple part types share the same workspace, the system needs to identify which object it is looking at, not just that something is there. Classification drives routing, grasp strategy selection, and downstream process decisions.

    Reliability across variability - Objects arrive in different positions, orientations, lighting conditions, and states of cleanliness. The detection system needs to perform consistently across that variability without requiring the environment to be rigidly controlled.

    These three requirements together determine which camera technology is appropriate and what the vision software needs to be capable of.

    Camera Types for Object Detection

    2D cameras - Detect objects by their appearance in a flat image: shape, color, edges, and surface patterns. They are fast, inexpensive, and reliable for detection tasks where depth is not needed. Barcode reading, label detection, color sorting, and presence verification are all well served by 2D cameras. The limitation is spatial: a 2D camera cannot tell the robot where an object is in three dimensions, which means it cannot reliably guide a robot arm to pick objects in variable positions or orientations.

    3D depth cameras - Add the Z axis to object detection, giving the robot full spatial awareness. Stereo cameras use two offset lenses to calculate depth from image disparity. They are affordable, compact, and accurate enough for most robot guidance applications involving standard parts. Structured light cameras project a known pattern and measure its deformation to produce denser, more accurate point clouds, which makes them the better choice for reflective metals, dark materials, or geometrically complex parts where stereo cameras lose accuracy.

    AI-powered camera systems - Combine the depth camera hardware with on-board or edge-compute AI that handles object classification without requiring the operator to train a custom model for each new object type. This is the most significant recent shift in object detection for robotics. Traditional systems required labeled training data for every part the robot would encounter. Modern systems using large pre-trained vision models classify novel objects on day one without building a training dataset.

    Blue Sky Robotics' Blue Argus platform is built on this approach. The operator describes the target object in natural language through the Python API. The system segments the camera image, classifies the target, calculates its 3D center point in robot coordinate space, and returns a pick coordinate ready to execute. No per-object training. No labeled dataset. No retraining when the product mix changes.
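    In code, that workflow reduces to a describe-then-pick loop. The sketch below is illustrative only: the module, class, and method names are hypothetical stand-ins, not Blue Argus's published API, so consult the actual SDK documentation for the real calls.

```python
# Hypothetical sketch of natural-language detection driving a pick. Every
# name here (blue_argus, VisionClient, detect, pick_pose, connect_arm) is
# an illustrative stand-in, not the documented Blue Argus API.
from blue_argus import VisionClient          # hypothetical import
from my_cell import connect_arm              # hypothetical arm SDK wrapper

client = VisionClient(camera="overhead")
robot = connect_arm("192.168.1.50")

result = client.detect("the smallest cardboard box in the bin")
if result.found:
    x, y, z, rx, ry, rz = result.pick_pose   # pose in robot base coordinates
    robot.move_to(x, y, z, rx, ry, rz)       # execute with the arm's Python SDK
```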
    Mounting the Object Detection Camera

    Where the camera is mounted affects what it can detect and how reliably it performs.

    Fixed overhead mount - The camera observes the workspace from a stationary position above the work area. This is the right configuration for bin picking, conveyor tracking, and palletizing, where the camera needs a stable, wide-angle view of the full work zone. It is faster to deploy, easier to calibrate, and produces more consistent detection results cycle over cycle.

    Wrist mount (eye-in-hand) - The camera mounts on the robot's wrist and moves with the arm. This works well for inspection applications where the camera needs to approach objects closely from multiple angles. Blue Argus uses a universal wrist mount that positions the 3D depth camera at the end of the arm alongside the end effector, connected via PoE Ethernet with no separate power supply required.

    Connecting Object Detection to the Robot Arm

    The detection system produces a result. The robot arm acts on it. The connection between the two requires that pick coordinates be output in the robot's native coordinate frame, compatible with the motion controller or path planning framework the arm uses.
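    That handoff is a rigid-body coordinate transform. A minimal sketch, assuming a hand-eye calibration has already produced the 4x4 camera-to-robot-base matrix (the values below are placeholders):

```python
import numpy as np

# 4x4 homogeneous transform from camera frame to robot base frame,
# produced once by hand-eye calibration. Placeholder values shown.
T_base_camera = np.array([
    [ 0.0, -1.0,  0.0, 0.35],
    [-1.0,  0.0,  0.0, 0.10],
    [ 0.0,  0.0, -1.0, 0.80],
    [ 0.0,  0.0,  0.0, 1.00],
])

def camera_to_base(point_camera):
    """Convert a detection's XYZ (meters, camera frame) to the robot base frame."""
    p = np.append(point_camera, 1.0)      # homogeneous coordinates
    return (T_base_camera @ p)[:3]

pick_xyz = camera_to_base([0.02, -0.15, 0.62])
print(pick_xyz)  # a coordinate the arm's SDK can execute directly
```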
    Every arm in the Blue Sky Robotics lineup accepts external coordinate inputs through a Python SDK. The UFactory Lite 6 ($3,500) is the most accessible entry point for object detection-guided automation. The Fairino FR5 ($6,999) covers the widest range of production applications with 5 kg payload and full ROS support. For heavier parts or palletizing applications, the Fairino FR10 ($10,199) provides the payload and reach needed alongside the Blue Argus detection layer.

    Getting Started

    Request a Blue Argus demo to see object detection running on your specific parts without model training. Use the Cobot Selector to match an arm to your application. Browse our full UFactory lineup and Fairino cobots with current pricing, or book a live demo. To learn more about computer vision software, visit Blue Argus.

    FAQ

    What is an object detection camera for robots?
    An object detection camera is a sensor used in robot automation cells to locate and identify objects in the robot's workspace. In combination with vision software, it tells the robot where objects are in 3D space, what they are, and how they are oriented so the robot can pick, inspect, or interact with them accurately.

    Do I need a 3D camera for object detection?
    For any application where the robot needs to pick objects in variable positions or orientations, yes. A 2D camera can detect whether an object is present and what it looks like, but cannot provide the 3D spatial coordinates a robot arm needs to locate and grasp it reliably. A 3D depth camera is required for manipulation tasks.

    Does object detection require training a custom model for each part?
    With traditional vision systems, yes. With AI-powered platforms like Blue Argus, no. Blue Argus uses pre-trained large vision models that detect and classify novel objects without per-SKU training, which makes it practical for operations with multiple part types or frequent product changes.

  • Repeatability vs Accuracy: The Spec That Actually Matters for Your Robot Cell

    If you have spent any time reading cobot datasheets, you have noticed that repeatability is always listed and accuracy is almost never mentioned. That is not an oversight. It is a deliberate reflection of which specification actually determines how well a robot performs in production.

    Most buyers assume accuracy is the important number. It sounds more rigorous. In practice, repeatability is the specification that determines whether your automation cell works reliably cycle after cycle, and accuracy in the traditional sense is nearly irrelevant for most cobot deployments. Understanding why requires understanding what each term actually measures and how robot arms are put to work in the real world.

    The Target Analogy

    The clearest way to understand the difference is through a simple analogy. Imagine a marksman firing at a target. If every shot lands in the same tight cluster, regardless of where that cluster falls on the target, the marksman is highly repeatable. If every shot lands close to the bullseye but scattered across a wide area, the marksman is accurate but not repeatable. If every shot lands in the same cluster right at the bullseye, they are both accurate and repeatable.

    For a robot arm in production, repeatability is the tight cluster. The arm returns to the same position, cycle after cycle, within a very small tolerance. Accuracy would mean that position is exactly where the robot was commanded to go in absolute space. Here is the critical insight: for most production automation, the absolute position does not matter. What matters is that the arm comes back to the same place every time.

    Why Repeatability Drives Production Performance

    Robot arms in manufacturing are almost never programmed by entering absolute coordinates. They are taught. An operator physically moves the arm to the correct position, records it, and the robot repeats that position on every subsequent cycle. When a robot is taught a pick position this way, any absolute accuracy error is eliminated at the teaching step. The robot learns the specific position it was shown, not a position calculated from coordinates. What determines whether the pick is reliable is whether the arm returns to that taught position consistently, which is repeatability.

    A robot with ±0.02 mm repeatability returns to within 20 micrometers of the taught position on every cycle. A robot with ±0.5 mm repeatability returns to within 500 micrometers. Both may have similar absolute accuracy in global space, but their production performance differs significantly on precision tasks. This is why every Blue Sky Robotics arm datasheet leads with repeatability. It is the number that predicts real-world performance.
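    Repeatability is also straightforward to estimate on the floor. A minimal sketch, assuming you have logged the arm's measured position (from a dial indicator or laser tracker, say) over repeated cycles to the same taught point:

```python
import numpy as np

# XYZ positions (mm) measured at the same taught point over repeated
# cycles. Placeholder data; a real test would log hundreds of cycles.
positions = np.array([
    [250.012, 100.005, 80.001],
    [250.009, 100.011, 79.998],
    [250.015, 100.002, 80.004],
    [250.010, 100.008, 80.000],
])

# Repeatability is the spread around the cluster center. Where that
# center sits in absolute space (accuracy) never enters the calculation.
center = positions.mean(axis=0)
deviations = np.linalg.norm(positions - center, axis=1)
print(f"Max deviation from cluster center: {deviations.max():.4f} mm")
```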
    When Accuracy Actually Matters

    Accuracy becomes directly relevant in two specific scenarios.

    Offline programming - When robot paths are generated from CAD models or simulated environments rather than taught by demonstration, the robot needs to arrive at coordinates that were calculated rather than demonstrated. Here, the difference between where the robot is commanded to go and where it actually ends up matters directly. Poor absolute accuracy means the programmed paths do not match reality and require manual touchup after loading.

    Multi-robot coordination - When two or more robot arms need to work on the same part simultaneously, or hand off a part between cells, their absolute positions in shared space need to align. Repeatability within each arm is necessary but not sufficient: each arm also needs to be positioned accurately relative to the shared coordinate system.

    For the vast majority of cobot deployments in manufacturing and logistics, neither of these scenarios applies. Positions are taught, not calculated. Robots work independently rather than in tightly coordinated multi-arm cells. Repeatability is the relevant specification.

    Reading Blue Sky Robotics Arm Specifications

    Every arm in the Blue Sky Robotics lineup is specified with repeatability as the primary precision metric. The UFactory Lite 6 ($3,500) achieves ±0.1 mm repeatability, covering the majority of light-duty pick and place, machine tending, and basic inspection applications where sub-millimeter cycle-to-cycle consistency is required. The Fairino FR5 ($6,999) and Fairino FR10 ($10,199) deliver ±0.02 mm repeatability, which is 20 micrometers. That level of consistency is appropriate for precision assembly, tight-tolerance pick and place, and vision-guided inspection where the arm needs to arrive at vision-calculated coordinates with minimal mechanical error.

    The distinction between these specifications matters most when the application involves tight tolerances or when the vision system is calculating pick points rather than the operator teaching them. In vision-guided cells, the camera calculates a pick coordinate and the arm needs to execute it accurately. High repeatability ensures the arm arrives where the vision system sends it, not just where it was taught. For vision-guided cells, Blue Sky Robotics' Blue Argus platform ships the camera, compute, and vision SDK as a pre-validated system. Because the hardware is tested together, the full pipeline from object detection to pick execution performs consistently rather than accumulating error across separately specified components.

    Getting Started

    Use our Cobot Selector to match an arm to your precision requirements. Browse our full UFactory lineup and Fairino cobots with current pricing, or book a live demo to discuss your specific tolerance requirements with our team. To learn more about computer vision software, visit Blue Argus.

    FAQ

    Which is more important for a robot arm, repeatability or accuracy?
    For most cobot deployments where positions are taught by demonstration, repeatability is more important. It determines whether the arm returns to the same position consistently cycle after cycle. Accuracy in absolute space matters primarily for offline programming and multi-robot coordination scenarios.

    What does ±0.02 mm repeatability mean in practice?
    It means the robot arm returns to within 20 micrometers of the taught or calculated position on repeated cycles. For context, a human hair is approximately 70 micrometers in diameter. ±0.02 mm repeatability is appropriate for precision assembly, tight-tolerance inspection, and vision-guided applications where the arm must reliably execute coordinates calculated by a camera system.

    Why do cobot datasheets list repeatability but not accuracy?
    Because repeatability is what determines production performance for most cobot applications. When positions are taught by demonstration rather than programmed from absolute coordinates, absolute accuracy errors are absorbed at the teaching step. Repeatability determines whether the taught position is hit consistently, which is what actually matters in production.

  • Specular and Diffuse Reflection in Robot Vision: Why Surface Type Determines Camera Choice

    One of the most common reasons a robot vision cell performs well in testing and fails in production is surface type. The camera used during development was tested on matte plastic samples. The actual production parts are polished aluminum castings. The point cloud that looked clean on the demo parts looks like noise on the real ones.

    Understanding how light reflects from different surfaces is not academic detail for a robot vision application. It is practical engineering that determines whether the camera you select will produce usable data on the parts you actually need to pick. This post explains the three reflection behaviors that matter most in industrial camera robotics and how each one affects 3D vision system performance.

    Diffuse Reflection: The Easy Case

    Diffuse reflection occurs when light hits a rough or matte surface and scatters in many directions simultaneously. Because the reflected light distributes broadly across the hemisphere above the surface, the camera receives a consistent amount of light regardless of the angle from which it views the object. The surface appears roughly the same brightness from any viewing direction.

    Most painted surfaces, cardboard, rubber, matte plastics, and rough metal castings produce diffuse reflection. These are the surfaces that industrial 3D cameras handle most easily. The consistent, predictable light return produces clean image data and reliable point cloud generation without special handling.

    For robot vision applications, diffuse surfaces are the baseline case. If all your parts are matte and non-reflective, almost any industrial depth camera will produce adequate point clouds. The challenge begins when parts have specular or mixed surface properties.

    Specular Reflection: The Problem Case

    Specular reflection occurs when light hits a smooth, polished surface and reflects at a specific angle rather than scattering broadly. The surface behaves like a mirror: light comes in at one angle and exits at the mirror angle on the other side of the surface normal. This creates two distinct problems for industrial cameras, depending on whether the camera is positioned in the direct reflection path or outside it.

    Weak specular reflection - When the camera is positioned away from the direct reflection angle, it receives very little of the reflected light from a specular surface. The image data from that surface region is dark or missing. In a 3D point cloud, this appears as gaps, holes, or low-density regions on the specular surfaces of the part. The robot cannot plan a grasp on a surface it cannot see.

    Strong specular reflection - When the camera is positioned in or near the direct reflection angle, it receives the full intensity of the reflected light in a narrow cone. The image data is overexposed and saturated. The camera sensor clips the signal, producing white-out regions where depth information is lost. In a 3D point cloud, this appears as spikes, flat planes, or regions of missing data where the actual surface geometry should be.

    Polished metals, machined surfaces, chrome components, stainless steel, glass, and any part with a high-gloss finish produce specular reflection. These are among the most common materials in manufacturing, which is why surface type is so important to evaluate before selecting a camera for a robot vision application.
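    Both failure modes are easy to flag in the raw capture data. A minimal sketch (NumPy, assuming a depth map where missing returns are encoded as 0 and an 8-bit intensity image from the same sensor; the thresholds are placeholders to tune per cell):

```python
import numpy as np

def scan_quality(depth_mm, intensity):
    """Estimate the two specular failure modes in a single capture."""
    hole_frac = np.mean(depth_mm == 0)       # weak reflection -> missing depth
    blown_frac = np.mean(intensity >= 255)   # strong reflection -> saturation
    return hole_frac, blown_frac

# Placeholder capture standing in for one scan of a polished part.
depth = np.random.randint(0, 900, (480, 640))
gray = np.random.randint(0, 256, (480, 640))

holes, blown = scan_quality(depth, gray)
if holes > 0.05 or blown > 0.02:             # example tolerances
    print(f"Suspect scan: {holes:.1%} missing depth, {blown:.1%} saturated")
```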
    The Multipath Effect: The Compound Problem

    The multipath effect is a third reflection challenge specific to structured light and similar active illumination camera systems. It occurs when projected light reflects off one surface and then reflects again off a second surface before reaching the camera, rather than traveling directly from the surface to the sensor.

    When this double bounce occurs, the camera receives light that has traveled a longer path than expected. Because the system calculates depth by measuring the travel path of the projected pattern, the extra path length produces incorrect depth readings on those regions of the point cloud. The result is distorted geometry that does not match the actual surface.

    The multipath effect is most common in environments with multiple closely spaced reflective surfaces: metal bins with shiny walls, assemblies with adjacent polished components, refrigerator handles, automotive body panels, and similar geometries where a part's specular surface has another specular surface nearby. The bin picking scenario is particularly prone to multipath issues when metal parts are being picked from metal bins.

    Choosing a Camera for Your Surface Type

    The practical implication of these three reflection behaviors is that camera selection needs to account for the actual surfaces of the parts and the environment being scanned, not just the general capability of the sensor.

    For diffuse surfaces - Stereo depth cameras and standard structured light cameras perform well. These are the easiest surface conditions to work with, and the widest range of camera technologies handles them reliably.

    For specular surfaces - Structured light cameras with HDR (high dynamic range) capture handle specular materials significantly better than standard cameras. HDR acquires multiple exposures in a single scan, capturing detail in both the underexposed shadow regions and the overexposed highlight regions. This is the technology that makes industrial 3D cameras reliable on polished metals and glass that would defeat a standard depth camera.

    For multipath-prone environments - Camera positioning, optimized projection pattern design, and advanced signal processing that identifies and corrects multipath artifacts are the mitigation strategies. Cell design matters here: avoiding geometry where reflective surfaces directly face each other reduces multipath interference before any software correction is needed.

    What This Means for Your Robot Cell

    Blue Sky Robotics' Blue Argus platform ships with a 3D depth camera selected to handle a broad range of real-world surface conditions. For operations handling polished or reflective metal parts where standard depth cameras produce unreliable data, our team can help evaluate whether the Blue Argus camera is appropriate for the specific surface conditions or whether a different sensor configuration is needed before deployment. The robot arms work regardless of camera type: the Fairino FR5 ($6,999), Fairino FR10 ($10,199), and the full lineup accept pick coordinates from any camera system through open API integration. The camera and vision software layer is where the surface type question gets resolved.

    Getting Started

    Request a Blue Argus demo to test the system on your specific parts and surface conditions. Use the Cobot Selector to match an arm to your application. Browse our full Fairino lineup and UFactory cobots with current pricing, or book a live demo. To learn more about computer vision software, visit Blue Argus.
    FAQ

    What is the difference between specular and diffuse reflection in machine vision?
    Diffuse reflection scatters light broadly from rough or matte surfaces, producing consistent image data from any viewing angle. Specular reflection reflects light at a specific mirror angle from smooth or polished surfaces, causing either underexposed dark regions or overexposed bright regions in the camera image, depending on where the camera is positioned relative to the reflection angle.

    Why do reflective metal parts cause problems for 3D cameras?
    Polished metals produce specular reflection, which means the camera either receives too little light (resulting in gaps in the point cloud) or too much light (resulting in saturated, overexposed regions). Both conditions produce depth data that is unreliable or missing on the reflective surfaces of the part. HDR-capable structured light cameras mitigate this by capturing multiple exposure levels in a single scan.

    What is the multipath effect in 3D vision?
    The multipath effect occurs when projected light from an active illumination camera reflects off one surface onto a second surface before reaching the camera. The extra bounce extends the light path, causing the depth calculation to produce incorrect geometry on the affected regions. It is most common in environments with multiple closely spaced reflective surfaces, such as metal parts in metal bins.

  • Specular Reflection and Diffuse Reflection: A Practical Guide for Robot Vision

    If you have ever watched a robot vision demo go perfectly on test parts and then struggle on actual production parts, surface reflection is likely the reason. It is one of the most overlooked variables in robot vision cell design, and it is entirely predictable once you understand how different surfaces interact with light.

    This post takes a different angle than most technical explanations. Rather than walking through the physics from the ground up, it focuses on what specular and diffuse reflection mean for someone designing or evaluating a robot vision cell: what to look for on the shop floor, what symptoms to expect when reflection is causing problems, and what to do about it.

    The Core Distinction

    Every surface reflects light. The question is how.

    Diffuse reflection is what happens on rough, matte, or low-gloss surfaces. When light hits a matte surface, it scatters in many directions at once. The surface sends light back toward the camera from a broad range of angles, which means the camera receives a consistent, predictable signal regardless of exactly where it is positioned relative to the part. Cardboard boxes, painted metal, rubber, matte plastic, and rough castings all behave this way. The camera sees them reliably because there is always light coming back toward it.

    Specular reflection is what happens on smooth, polished, or high-gloss surfaces. When light hits a polished surface, it reflects at a specific angle: the mirror angle, opposite the angle of incidence. The surface concentrates reflected light in a narrow cone rather than scattering it broadly. Whether the camera sees that light depends entirely on whether it is positioned within that narrow reflection cone. That geometry is what creates the problems.

    What Specular Reflection Looks Like as a Problem

    In a robot vision cell, specular reflection from production parts shows up as one of two failure modes, and they look opposite from each other.

    Dark patches and missing data - When the camera is not in the path of the specular reflection, it receives almost no light from the polished surface. In a 2D image this appears as dark regions. In a 3D point cloud it appears as gaps, holes, or sparse data where the surface should be. The vision software cannot identify a grasp point on a surface it cannot see, and the robot either misses the pick or selects a suboptimal grasp on a different surface region.

    Overexposed blowout - When the camera is positioned directly in the reflection path, it receives the full concentrated intensity of the specular reflection. The sensor saturates. In a 2D image this appears as bright white regions with no detail. In a 3D point cloud it produces spikes, false planes, or distorted geometry where the actual surface shape should be represented. The resulting pick coordinates can be wildly incorrect even though the camera appears to be capturing the scene.

    The frustrating part of specular reflection problems is that they are inconsistent. The same camera and the same part produce different results depending on the part's orientation relative to the camera and light source. A part that scans well in one orientation fails in another. This looks like random system instability but is actually a predictable physics problem.

    How Part Geometry Makes It Worse

    Specular reflection problems compound when parts have multiple reflective surfaces that face each other or face the bin walls. This creates the multipath effect: light bounces from one surface to another before reaching the camera, traveling a longer path than the system expects. Because 3D structured light cameras calculate depth by measuring how light patterns deform across surfaces, extra-bounce light produces incorrect depth readings on the affected regions.

    A polished metal part in a metal bin is a common case. Light from the camera hits the part, reflects to the bin wall, bounces back, and some of that secondary light reaches the camera alongside the primary reflection. The resulting point cloud has distorted geometry in the areas affected by the secondary bounce, which are usually the edges and lower surfaces of the part, exactly the regions where grasp points are often calculated.

    What to Do About It

    Three practical approaches address specular and diffuse reflection problems in robot vision cells.

    Match camera technology to surface type - Structured light cameras with high dynamic range (HDR) capture handle specular surfaces significantly better than standard cameras. HDR acquires multiple exposures in a single scan, capturing detail in both the dark underexposed regions and the bright overexposed regions that single-exposure systems cannot handle simultaneously. For cells handling polished metals, machined surfaces, or glass components, HDR capability is not optional. (A simplified sketch of the idea follows this list.)

    Adjust camera positioning - For specular surfaces, small changes in camera angle relative to the part can move the worst reflection artifacts from critical surface regions to less important ones. This is a low-cost mitigation that is worth evaluating before changing hardware.

    Control the environment - Eliminating nearby reflective surfaces that could cause multipath bounces reduces problems before any software correction is needed: choose plastic bins over metal bins, add matte coatings to bin walls, or change the orientation of specular parts relative to the camera.
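    The HDR idea can be sketched in a few lines. This is a simplification of what HDR-capable cameras do internally, assuming two depth maps of the same scene captured at short and long exposure, with invalid pixels encoded as 0:

```python
import numpy as np

def fuse_hdr_depth(depth_short, depth_long):
    """Merge two exposures: the long one resolves dark diffuse regions,
    the short one recovers bright specular regions the long one blew out."""
    fused = depth_long.copy()
    missing = fused == 0                   # saturated or dropped in long exposure
    fused[missing] = depth_short[missing]  # fall back to the short exposure
    return fused

# Placeholder scans of the same polished part at two exposure times.
short_exp = np.random.randint(0, 900, (480, 640))
long_exp = np.random.randint(0, 900, (480, 640))
combined = fuse_hdr_depth(short_exp, long_exp)
print(f"Valid pixels after fusion: {np.mean(combined > 0):.1%}")
```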
    Applying This to Your Cell

    Blue Sky Robotics' Blue Argus platform is designed for real industrial environments, which means real industrial surface conditions. Testing Blue Argus on your actual production parts before deployment is the most reliable way to confirm whether the included camera handles your specific surface type or whether additional configuration is needed. All Blue Sky Robotics robot arms, including the Fairino FR5 ($6,999) and Fairino FR10 ($10,199), accept pick coordinates from any camera system through open API integration, so camera configuration changes do not require changing the robot arm.

    Getting Started

    Request a Blue Argus demo on your specific parts. Browse our full Fairino lineup and UFactory cobots with current pricing, or book a live demo. To learn more about computer vision software, visit Blue Argus.

    FAQ

    What causes specular reflection problems in robot vision?
    Specular reflection problems occur when polished or smooth surfaces concentrate reflected light at a specific angle rather than scattering it broadly. Depending on camera position, this produces either dark gaps in the point cloud where no light reaches the camera, or overexposed blowout regions where the camera is flooded with light. Both conditions produce unreliable depth data.

    How is diffuse reflection different from specular reflection for industrial cameras?
    Diffuse reflection from matte surfaces scatters light broadly, giving the camera a consistent signal from any viewing angle. Specular reflection from polished surfaces concentrates light in a narrow cone, making the camera's received signal highly dependent on its precise position relative to the part. Diffuse surfaces are easy for cameras to handle; specular surfaces require specific camera technology or positioning to manage.

    What is the best way to handle reflective parts in a robot vision cell?
    The most reliable approach is to use a structured light camera with HDR capability, which captures usable data across both over- and underexposed regions in a single scan. Adjusting the camera angle to move reflection artifacts away from critical surface regions and reducing multipath opportunities through cell design are complementary strategies that help regardless of camera type.

  • Automated Tray Unloading: How Robots Handle Plastic, Transparent, and Semitransparent Trays

    Automated tray unloading sounds like a straightforward depalletizing problem. The robot picks trays off a pallet and places them onto a conveyor or into a downstream process. Straightforward until the trays are plastic.

    Plastic trays, and particularly semitransparent or translucent plastic trays, are among the most difficult objects for standard 3D vision systems to handle reliably. They lack the surface features that help cameras locate and identify objects. They transmit and scatter light in ways that produce inconsistent or missing depth data. And they often arrive stacked with very little height difference between the top tray and the ones below it, making layer discrimination a precision requirement rather than a general sensing problem.

    This post explains why tray unloading is a challenging vision application, what makes plastic and semitransparent trays specifically difficult, and how to build a robot cell that handles them reliably.

    Why Trays Are Harder Than Cases

    Most automated depalletizing content focuses on cardboard cases. Cases are relatively cooperative for vision systems: they have matte surfaces that produce diffuse reflection, printed graphics and barcodes that provide surface features for cameras to lock onto, and substantial height differences between layers that make layer discrimination easy. Plastic trays behave differently in almost every relevant dimension.

    Surface characteristics - Plastic trays are often smooth, uniform in color, and low in surface texture. They provide few visual features for the camera to differentiate the top surface of the top tray from the surface below it. Semitransparent trays are worse: they partially transmit light rather than reflecting it from the surface, which causes structured light cameras to generate inconsistent depth readings because the light pattern penetrates into the material rather than reflecting cleanly from the top surface.

    Stacking geometry - Trays are designed to stack efficiently, which means they nest together with very little vertical clearance between layers. The height difference between the top tray and the one below it may be only a few millimeters. This demands significantly higher Z-axis precision from the vision system than standard case depalletizing, where layer height differences are measured in centimeters.

    Edge geometry - Tray edges and rims are thin and often beveled. A camera generating a point cloud of a stacked tray set needs to resolve the rim of the top tray from the rim of the tray below it at close spacing. For standard depth cameras, this is at or beyond their practical resolution limit.

    What the Vision System Needs to Handle

    Reliable automated tray unloading requires a vision system that addresses these specific challenges rather than a general-purpose depalletizing solution.

    Accurate depth data on low-texture surfaces - The camera needs to produce reliable point clouds on smooth, featureless plastic surfaces where standard stereo cameras struggle for lack of surface disparity. Structured light cameras, which generate their own texture by projecting a pattern onto the surface, are significantly more robust on low-texture materials. They do not depend on the surface having features to match between images.

    Semitransparent material handling - For trays that transmit or scatter light, HDR exposure control and advanced signal processing help recover depth data from surfaces that would defeat standard single-exposure systems. The goal is to capture the top surface reflection accurately despite the material's partial transparency.

    Precise layer discrimination - The vision system needs to identify the top tray with enough Z-axis accuracy to confirm it is the topmost layer and calculate a grasp point that does not contact the tray below. For tightly nested trays, this requires sub-millimeter depth precision in the layer separation measurement. (A sketch of this step follows the list.)

    Consistent grasp point selection - Tray rims and lips are the natural grasp targets for most end-of-arm tools. The vision system needs to locate the rim geometry reliably and calculate approach angles that allow the gripper to engage the rim without sliding down into the nested stack.
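    A common way to implement the layer discrimination step is a Z-band filter on the point cloud. A minimal sketch, assuming an overhead camera and a cloud in meters where smaller Z means closer to the camera:

```python
import numpy as np

def top_layer_points(points, band_mm=2.0):
    """Isolate the topmost tray: keep only points within a narrow Z band
    of the closest surface visible to an overhead camera."""
    z = points[:, 2] * 1000.0                # meters -> millimeters
    top_z = np.percentile(z, 1)              # robust "closest surface" estimate
    return points[z < top_z + band_mm]

# Placeholder cloud standing in for a scan of a nested tray stack.
cloud = np.random.uniform([-0.2, -0.3, 0.70], [0.2, 0.3, 0.75], (50_000, 3))
rim = top_layer_points(cloud)
print(f"{len(rim)} points assigned to the top tray")
```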
    End-of-Arm Tooling Considerations

    Camera performance is only part of the tray unloading challenge. The end effector needs to be designed for the tray geometry.

    Vacuum grippers are the most common tool for plastic tray unloading. They work well on the flat base surface of the tray and do not require precise alignment to a specific feature. The challenge is that the base surface of a stacked tray is resting on the tray below it, making it inaccessible until the top tray is lifted. This means vacuum grippers need to engage from the rim or side wall rather than the base, which requires the gripper geometry to match the tray's rim profile.

    Custom gripper configurations that clamp the rim from both sides, or suction cups positioned to engage the tray wall at a specific height, are the most reliable solutions for nested plastic trays. The gripper design is often as consequential as the camera selection for reliable production performance.

    Which Arms Handle Tray Unloading

    Tray unloading payload requirements depend on tray size, material, and whether trays are picked individually or in stacks. Most plastic trays used in food processing, electronics, and consumer goods assembly fall well within the 5 to 10 kg range for individual picks.

    The Fairino FR5 ($6,999) covers light tray unloading applications where individual tray weight stays under 5 kg. Its 924 mm reach and 6-axis flexibility allow it to approach tray rims from the angles that vision-calculated grasp points require. For heavier trays or stacked picks where combined weight exceeds 5 kg, the Fairino FR10 ($10,199) provides 10 kg of payload capacity alongside the reach needed to cover a full pallet footprint from a fixed mount. For cells integrating a vision platform, Blue Sky Robotics' Blue Argus platform provides a 3D depth camera, compute unit, and vision SDK as a pre-validated kit compatible with both arms through Python SDK integration.

    Getting Started

    Use our Cobot Selector to match an arm to your tray handling requirements. Browse our full Fairino lineup and UFactory cobots with current pricing. Request a Blue Argus demo to test vision performance on your specific tray type, or book a live demo to discuss your full cell design. To learn more about computer vision software, visit Blue Argus.
    FAQ

    Why are plastic trays difficult for robot vision systems?
    Plastic trays present three challenges: smooth, low-texture surfaces that provide few features for cameras to lock onto; semitransparent materials that transmit rather than reflect light, causing inconsistent depth readings; and tight nesting geometry with very small height differences between stacked layers that demand high Z-axis precision from the vision system.

    What type of camera works best for semitransparent tray unloading?
    Structured light cameras with HDR exposure control handle semitransparent and low-texture surfaces more reliably than standard stereo cameras. Structured light cameras project their own pattern onto the surface rather than relying on surface features, which is what makes them effective on materials that defeat feature-matching approaches.

    What end effector works best for plastic tray unloading?
    Vacuum grippers engaging the tray wall or rim are the most common solution for nested plastic trays where the base surface is inaccessible. Custom gripper geometry matched to the specific tray rim profile produces the most reliable pick results. The gripper design is often as important as the camera selection for consistent tray unloading performance.

  • Vision Guided Robot: How It Works and Where It Makes the Biggest Impact

    Mention vision guided robots to a plant manager dealing with inconsistent product placement or frequent SKU changeovers and the reaction is usually the same: interest, followed immediately by skepticism. The technology sounds compelling in theory, but the assumption has long been that vision-guided automation is expensive, fragile, and built for high-volume operations with dedicated integration teams.

    That assumption is changing. The cameras, software, and robot arms that make up a vision guided robot system have become significantly more capable and more affordable. A small to mid-size manufacturer or distributor can now deploy a vision-guided cell for pick and place, palletizing, inspection, or finishing at a cost that makes the business case straightforward. Fairino cobots start at $6,999. A complete vision-guided cell is well within reach for operations that previously assumed automation was out of their budget.

    This post covers how a vision guided robot works, what makes vision guidance different from fixed automation, and which arms Blue Sky Robotics recommends for the job.

    What Vision Guided Robots Actually Are

    A vision guided robot is a robotic arm paired with one or more cameras and software that interprets visual input and uses it to direct the robot's motion. Rather than following pre-programmed coordinates, the robot reads the scene in real time, identifies what it is looking at, determines the position and orientation of the target, and calculates the correct path of movement.

    This is a significant departure from traditional fixed automation. Legacy systems require parts and products to arrive in precisely the same location and orientation every time. Change a box size, swap a SKU, or introduce any variability in how items are presented, and the system needs to be reprogrammed. A vision guided robot, by contrast, can locate an object wherever it lands on a conveyor, identify a case regardless of how it is rotated, or detect a surface defect without being told exactly where to look.

    The vision system itself typically consists of a 2D or 3D camera, a lighting setup suited to the environment, and software that processes the image feed and translates it into positional data the robot controller can act on. In 3D applications, the camera produces a point cloud of the scene that gives the robot precise spatial information about depth and shape, not just a flat image.
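    In outline, the control loop this describes is simple. The sketch below is schematic: every object and method name is a stand-in for the corresponding call in whatever camera and robot SDKs a given cell uses, not a real API.

```python
# Schematic vision-guided cycle; each call stands in for a real SDK call.
def run_cycle(camera, detector, robot):
    scene = camera.capture()            # 2D image plus 3D point cloud
    target = detector.locate(scene)     # position + orientation, or None
    if target is None:
        return False                    # nothing found; try again next cycle

    robot.move_to(target.pick_pose)     # pose already in robot coordinates
    robot.close_gripper()
    robot.move_to(target.place_pose)
    robot.open_gripper()
    return True
```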
    Why Vision Guidance Changes the Equation

    The practical value of a vision guided robot comes down to flexibility. Fixed automation makes sense when you are running one product, one packaging format, and one pallet pattern at high volume indefinitely. Most operations are not that simple. Mixed SKUs, frequent changeovers, variable case sizes, and inconsistent product presentation are the norm for small and mid-size manufacturers and distributors. Vision guidance is what allows a single robot to handle all of it without requiring an integrator every time something changes. A few specific advantages stand out.

    No reprogramming for product changes - When a new SKU comes down the line, the vision software adapts. Operators interact with a graphical interface rather than rewriting robot paths. The system identifies the new item and adjusts its behavior accordingly.

    Reliable recognition of difficult objects - 3D cameras produce detailed point clouds that allow the robot to distinguish between tightly packed cases, identify the top layer of a mixed load, pick from a disorganized bin, or detect surface anomalies that would be invisible to a fixed sensor.

    Consistent performance across shifts - A vision guided robot does not get fatigued, distracted, or injured. It applies the same standard of precision at hour one and hour ten. For inspection tasks especially, that consistency translates directly into better quality output.

    Where Vision Guided Robots Deliver the Most Value

    Vision guidance is not a single use case. It is a capability that improves performance across a wide range of applications.

    Pick and place - This is the most common starting point. Vision allows the robot to locate and pick items from unstructured environments: a moving conveyor, a bin of mixed parts, or a tote with no defined product placement. Blue Sky Robotics works with operations across logistics, food production, and manufacturing on pick and place cells that handle exactly this kind of variability.

    Palletizing and depalletizing - Vision-guided palletizing uses a 3D camera mounted above the work area to give the robot real-time information about case position and orientation. The robot reads the scene, plans a collision-free pick path, and stacks without requiring every case to arrive in an identical position. Mixed pallet patterns, variable case sizes, and angled items are all manageable.

    Quality inspection - Cameras allow the robot to examine products for defects, dimensional inconsistencies, incorrect labeling, or surface flaws. Vision-guided inspection runs faster and more consistently than manual checking and produces a data trail that fixed sensors cannot.

    Painting and surface finishing - In finishing applications, vision helps the robot map the surface of a part before applying paint, powder coat, or adhesive. Blue Sky Robotics' AutoCoat system uses this approach to deliver even coverage regardless of part variation, reducing waste and rework on each run.

    Material handling and AS/RS - Vision-guided robots can identify products by label, shape, or barcode and route them correctly through automated storage and retrieval systems without manual intervention.

    Which Robots Work Best for Vision Guided Applications

    The right arm depends on the application, the payload requirements, and the workspace. Here is how the Blue Sky Robotics lineup maps to common vision-guided use cases.

    Fairino FR5 ($6,999) is the most accessible entry point for vision-guided automation. At 5 kg payload, it suits lightweight pick and place, inspection, and small-part assembly. It is a practical starting point for proof-of-concept deployments before scaling up.

    Fairino FR10 ($10,199) handles the majority of consumer goods and food and beverage applications. Its 10 kg payload and 1,450 mm reach cover a standard pallet footprint from a fixed mount and make it a strong general-purpose arm for palletizing and material handling cells.

    Fairino FR16 ($11,699) steps up to 16 kg for heavier cases, bags of product, or applications that require picking multiple items in a single grasp. The additional payload headroom also accommodates heavier end-of-arm tooling without limiting lift capacity.

    Fairino FR20 ($15,499) is the right choice for operations with heavier unit loads or applications that require the arm to reach the outer edges of a large work envelope. The 20 kg payload and extended reach mean fewer compromises on layout and case weight.

    For operations that need collaborative robot performance in a compact footprint, the UFactory Lite 6 ($3,500) is a strong option for benchtop inspection or lightweight pick and place alongside human workers.

    Blue Sky Robotics' automation software handles the vision integration and mission logic that connects the camera's output to robot motion in a single platform, reducing the integration complexity that vision-guided applications can add.

    Where to Start

    If your operation has been managing variability manually and has assumed vision-guided automation is not within reach, that assumption is worth revisiting. The Automation Analysis Tool evaluates your specific application for feasibility. The Cobot Selector matches the right arm to your payload and task. And if you want to see how a vision guided robot handles your specific application before committing to hardware, book a live demo with the Blue Sky Robotics team. Fixed automation used to be the only realistic option for most facilities. Increasingly, it is not. To learn more about computer vision software, visit Blue Argus.

    FAQ

    What is the difference between a vision guided robot and a traditional robot?
    A traditional robot follows pre-programmed coordinates and requires parts to be presented in a fixed, consistent position. A vision guided robot uses cameras and image-processing software to locate objects in real time and adjust its movements accordingly. This allows it to handle variable product placement, mixed SKUs, and changeovers without reprogramming.

    What industries use vision guided robots?
    Vision guided robots are used across manufacturing, logistics, food and beverage, healthcare, electronics, and automotive. Any application that involves variable product presentation, quality inspection, or mixed-SKU handling is a strong candidate.

    Do I need a systems integrator to deploy a vision guided robot?
    Not necessarily. Modern vision-guided automation platforms with graphical interfaces and code-free programming have lowered the barrier significantly. Blue Sky Robotics can help scope the right cell and support the setup without requiring a full integration engagement.

    How accurate is a vision guided robot?
    Accuracy depends on the camera resolution, lighting conditions, software calibration, and the robot arm's repeatability spec. Well-configured vision-guided systems routinely achieve sub-millimeter precision on structured applications and reliable performance in less controlled environments like bin picking.

  • Vision Guided Robotics 3D Cameras: When They Fall Short and What to Use Instead

    3D cameras are the default sensing technology in vision guided robotics. They produce detailed point clouds of the workspace, give robots the depth information they need to plan picks, and handle a wide range of standard applications reliably. For most palletizing, pick and place, and material handling deployments, a 3D camera is exactly the right tool.

    But not every application is standard. Transparent parts, highly reflective surfaces, fast-moving conveyors, outdoor environments, and extreme lighting conditions can all push a 3D camera past its reliable operating range. When that happens, the answer is not always to find a better 3D camera. Sometimes the answer is a different sensing approach entirely.

    This post covers how 3D cameras work in vision guided robotics, where they run into trouble, and which alternative sensing technologies are worth considering when the standard approach does not fit.

    How 3D Cameras Work in Vision Guided Robotics

    Most 3D cameras used in vision guided robotics systems operate on one of three principles: structured light, time-of-flight, or stereo vision.

    Structured light cameras project a known pattern of light onto the scene and measure how that pattern deforms across the surfaces it hits. The distortion encodes depth information, which the software converts into a point cloud. This approach produces high-resolution, accurate point clouds and is well suited to controlled indoor environments with consistent lighting.

    Time-of-flight cameras measure the time it takes for emitted light pulses to travel to the scene and return to the sensor. They are faster than structured light systems and less sensitive to ambient light variation, but typically produce lower-resolution depth data.

    Stereo vision cameras use two or more lenses offset from each other, similar to human eyes, to calculate depth by comparing the slightly different images each lens captures. Stereo systems work well in textured scenes with plenty of surface detail for the algorithm to match between frames.

    All three approaches share a common dependency: they need light to behave predictably when it strikes the surface of an object. When it does not, the point cloud suffers.
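    The stereo case reduces to one formula: depth Z = f x B / d, where f is the focal length in pixels, B is the baseline between the two lenses, and d is the disparity in pixels between matched features. A minimal sketch, with placeholder intrinsics roughly typical of a compact stereo camera:

```python
def stereo_depth(disparity_px, focal_px=900.0, baseline_m=0.055):
    """Depth from stereo disparity: Z = f * B / d.
    Intrinsics here are placeholders, not a specific camera's calibration."""
    if disparity_px <= 0:
        return None  # no feature match -> no depth: a hole in the point cloud
    return focal_px * baseline_m / disparity_px

print(stereo_depth(90))  # ~0.55 m: large disparity means a near surface
print(stereo_depth(9))   # ~5.50 m: small disparity, far surface, noisier depth
```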
Where 3D Cameras Fall Short

Transparent and translucent materials - Clear objects allow light to pass through rather than reflecting it back to the sensor. The result is sparse, noisy, or entirely absent point cloud data on transparent surfaces. Blister packs, glass vials, clear pouches, and polybags are common examples. Translucent materials scatter light unpredictably and produce point clouds with the right general shape but significant noise at the most translucent surfaces.

Highly reflective or metallic surfaces - Shiny surfaces reflect structured light away from the sensor at unpredictable angles, producing the same problem as transparent materials: missing or corrupted depth data. Polished metal parts, chrome components, and foil packaging are frequent offenders in manufacturing and electronics applications.

Fast-moving targets - Structured light cameras require the scene to be still during the projection and capture cycle. On fast-moving conveyors, this means motion blur and frame misalignment that degrade point cloud quality. Time-of-flight cameras handle motion better but still have limits at high conveyor speeds.

Outdoor or variable lighting environments - Structured light systems are sensitive to ambient infrared light, which means direct sunlight or rapidly changing outdoor lighting conditions can overwhelm the projected pattern and produce unreliable depth data.

Very small or very large objects - Most 3D cameras are optimized for a specific working volume. Objects significantly smaller or larger than that volume may not produce enough usable data for reliable grasp planning.

Alternatives Worth Considering

When a 3D camera is not the right fit, several alternative sensing approaches have proven reliable in production vision guided robotics deployments.

2D machine vision - For applications where depth information is less critical than location and orientation in a flat plane, 2D cameras combined with strong image processing software can deliver reliable pick performance at lower cost and with faster cycle times than 3D systems. Barcode reading, label verification, and flat part picking are natural fits. Many vision guided robotics platforms support fusing 2D and 3D data, which allows the 2D image to fill gaps where the depth sensor falls short (a minimal sketch of this pattern follows this list).

Laser line profilers - A laser line profiler projects a single line of laser light across the scene and captures the reflected profile with a camera. As the object moves through the laser line, the system builds up a 3D profile scan over time. This approach handles reflective surfaces better than structured light cameras and is commonly used for bin picking of metal parts and quality inspection of shiny components.

Structured light with specialized modes - Some 3D camera manufacturers have developed operating modes specifically designed for transparent and reflective materials. These modes adjust the projection and capture parameters to maximize usable return signal from difficult surfaces. They do not perform as well as standard modes on ideal targets, but they extend the range of materials a single camera can handle reliably.

Tactile and force sensing - For applications where visual confirmation is not sufficient on its own, force-torque sensors and tactile grippers provide feedback during the pick itself. The robot can detect whether a grasp is secure, adjust grip pressure in real time, and respond to unexpected contact. This is particularly useful for handling fragile, deformable, or variably shaped objects where visual positioning alone does not guarantee a successful pick.

Thermal imaging - In food processing and pharmaceutical applications where temperature is a quality indicator, thermal cameras can serve as an additional sensing layer alongside visual systems. They are not a replacement for 3D depth sensing but can flag items that fail temperature criteria before the robot picks them.

AI-based 2D depth estimation - Advances in deep learning have made it possible to estimate depth from a single 2D image with increasing accuracy. While not yet at the precision level of dedicated 3D hardware for all applications, AI-based depth estimation is improving rapidly and is viable for applications where approximate depth is sufficient and hardware simplicity matters.
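Here is a minimal sketch of the 2D/3D fusion pattern mentioned above, assuming the 2D image and the depth map are already registered pixel-to-pixel. The function and thresholds are illustrative, not from any specific vision platform.

```python
import numpy as np

def fused_pick_point(gray_image, depth_map, intensity_threshold=128):
    """Estimate a pick point by combining a 2D segmentation with sparse depth.

    gray_image: 2D array of pixel intensities (object brighter than background).
    depth_map: 2D array of depths in meters, NaN where the sensor got no return.
    Returns (row, col, depth_m) at the object centroid, or None.
    """
    object_mask = gray_image > intensity_threshold   # 2D image finds the object
    rows, cols = np.nonzero(object_mask)
    if rows.size == 0:
        return None
    r, c = int(rows.mean()), int(cols.mean())        # centroid from 2D alone

    # Depth comes only from the valid 3D returns inside the 2D mask, so a
    # hole in the point cloud (e.g. a shiny patch) does not shift the pick.
    depths = depth_map[object_mask]
    depths = depths[np.isfinite(depths)]
    if depths.size == 0:
        return None                                  # no usable depth at all
    return r, c, float(np.median(depths))

img = np.zeros((6, 6)); img[2:5, 2:5] = 255
dep = np.full((6, 6), np.nan); dep[3, 3] = 0.42      # a single valid 3D return
print(fused_pick_point(img, dep))                    # (3, 3, 0.42)
```

Taking the pick location from the 2D mask and only a depth statistic from the 3D data is what makes the dropouts harmless: a missing depth patch changes the sample count, not the pick position.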
Matching the Right Sensing Approach to the Right Robot

The sensing technology is one half of the equation. The robot arm needs to match the payload and reach requirements of the application regardless of which camera or sensor is in use.

For lightweight piece picking, inspection, and small-part handling where alternative sensing approaches are most common, the UFactory Lite 6 ($3,500) and Fairino FR5 ($6,999) cover the payload range with compact footprints suited to controlled picking cells. For mid-range applications including food and beverage handling and mixed-SKU logistics, the Fairino FR10 ($10,199) handles the majority of case weights and reaches a standard pallet footprint from a fixed mount position. For heavier payloads or applications requiring extended reach, the Fairino FR16 ($11,699) and Fairino FR20 ($15,499) provide the capacity without requiring a full industrial robot footprint.

Blue Sky Robotics' automation software is built to integrate with multiple sensing modalities, not just standard 3D cameras, which means the platform can support alternative configurations when the application demands it.

Where to Start

If your operation has run into the limits of standard 3D camera systems, or if you are evaluating a new application and want to make sure you are starting with the right sensing approach, the Automation Analysis Tool helps scope feasibility for your specific case. The Cobot Selector matches the right arm to your payload and workspace. And if you want to see how a vision guided robotics cell handles your specific material type or environment before committing to hardware, book a live demo with the Blue Sky Robotics team.

To learn more about computer vision software visit Blue Argus.

3D cameras solve most vision guided robotics problems. Knowing when they do not is what separates a system that works in the lab from one that runs reliably in production.

FAQ

Can a single camera handle both transparent and opaque objects?

Some 3D cameras offer specialized modes for transparent materials that improve detection compared to standard modes, though performance is typically not as strong as on opaque surfaces. For mixed-material picking environments, fusing 2D and 3D data or using deep learning-based recognition trained on clear materials is often more reliable than relying on a single sensing mode.

Is 2D vision sufficient for pick and place applications?

It depends on the application. If parts are presented in a consistent orientation on a flat surface, 2D vision can be sufficient and offers faster cycle times and lower cost than 3D systems. Applications with variable part orientation in three dimensions, bin picking, or significant depth variation typically require 3D sensing.

How does lighting affect vision guided robotics performance?

Lighting is one of the most significant factors in vision system reliability. Structured light cameras are sensitive to ambient infrared, which makes outdoor or highly variable lighting conditions challenging. Consistent, controlled lighting is one of the most impactful things an integrator can get right in a vision guided robotics cell.

What is the most common cause of vision system failures in production?

Inconsistent part presentation is the most frequent culprit. Parts that arrive at different orientations than the system was configured for, or surfaces that change reflectivity with different batches or finishes, are common sources of detection failures. A robust vision system is designed to handle a defined range of variability, not infinite variation.

  • 3D Sensing Camera: How to Choose the Right One for Your Automation Cell

When people start planning a vision-guided automation cell, the conversation usually jumps quickly to the robot arm: payload, reach, price. The camera often gets treated as an afterthought, something to sort out during integration.

That is a mistake. The 3D sensing camera is the part of the system that determines what the robot knows. A robot arm paired with the wrong camera for the application will underperform regardless of how capable the arm itself is. Transparent parts will not be detected reliably. Reflective surfaces will return noisy point clouds. Fast conveyors will blur the scan. The whole cell will run below its potential because the sensing layer was not matched to the job.

This post is not about vision-guided robotics in general. It is specifically about how to think through 3D sensing camera selection: the three main technologies, what each one is actually good at, where each one struggles, and how to match the right camera to your specific application before you commit to hardware.

The Three Main 3D Sensing Technologies

There is no single 3D sensing camera technology that is best for every application. The right choice depends on your surface types, lighting conditions, required resolution, and cycle time. Here is how the three main approaches compare.

Structured light - A structured light camera projects a known pattern, typically a grid or series of stripes, onto the scene and captures how that pattern deforms across the surfaces it hits. The deformation encodes depth information, which the software converts into a point cloud. Structured light cameras produce the highest point cloud density and resolution of the three approaches, which makes them the default choice for bin picking, palletizing, and precision assembly. The trade-off is sensitivity: they require controlled indoor lighting and struggle on transparent and highly reflective surfaces where the projected pattern does not reflect cleanly back to the sensor.

Time-of-flight - A time-of-flight camera emits pulses of light and measures how long each pulse takes to return to the sensor. Depth is calculated directly from that travel time (the short sketch after this comparison shows the arithmetic). Time-of-flight cameras are faster than structured light systems and less sensitive to ambient lighting variation, which makes them better suited to applications with variable lighting or faster cycle time requirements. The trade-off is resolution: time-of-flight point clouds are typically less dense than structured light output, which limits their usefulness for high-precision inspection or fine-detail pick applications.

Stereo vision - A stereo camera uses two offset lenses to calculate depth by comparing the slightly different images each lens captures, the same principle as human binocular vision. Stereo systems work well in textured scenes with plenty of surface detail for the algorithm to match across the two frames. They are often the most cost-effective 3D sensing option and can perform well outdoors where structured light systems are limited by ambient infrared. The trade-off is that stereo systems struggle on surfaces with low texture or uniform color, where there is not enough visual contrast for the matching algorithm to work accurately.
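Of the three, the time-of-flight principle is simple enough to show directly: light travels to the surface and back, so depth is half the round-trip time multiplied by the speed of light. A minimal sketch of the principle, not vendor code; real sensors measure pulse timing or phase shift in hardware.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0  # meters per second

def tof_depth_m(round_trip_s):
    """Depth from a time-of-flight measurement: d = c * t / 2."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A surface 1.5 m away returns the pulse in roughly 10 nanoseconds.
print(tof_depth_m(10e-9))  # ~1.499 m
```

The arithmetic also shows where the resolution trade-off comes from: a 1 mm depth difference corresponds to under 7 picoseconds of round-trip time, which is part of why time-of-flight depth data is noisier than structured light output.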
How Surface Type Should Drive Your Camera Choice

The single biggest factor in 3D sensing camera selection is often the surface properties of the objects you are handling. Getting this wrong is the most common reason vision-guided cells underperform in production.

Matte, opaque surfaces - This is the ideal scenario for all three technologies. Structured light will give you the best resolution and point cloud density. If cycle time or lighting flexibility matters more than resolution, time-of-flight is a strong alternative.

Reflective and metallic surfaces - Structured light cameras struggle here because the projected pattern scatters off shiny surfaces at unpredictable angles. A laser line profiler is typically the better choice for highly reflective parts, as the concentrated laser intensity overpowers much of the interference that degrades structured light performance. For moderately reflective surfaces, some structured light cameras offer dedicated modes that reduce sensitivity to specular reflection.

Transparent and translucent materials - This is the hardest category for any light-based 3D sensing camera. Light passes through rather than reflecting cleanly, producing sparse or absent point cloud data. Some camera manufacturers offer specialized modes for transparent materials that improve detection, but performance is still limited compared to opaque surfaces. For high-mix picking environments that include clear objects, combining 2D image data with 3D depth data, or using deep learning-based recognition trained on transparent materials, typically produces more reliable results than relying on any single 3D sensing mode.

Dark or light-absorbing surfaces - Very dark surfaces absorb the projected or emitted light rather than reflecting it, which produces the same sparse point cloud problem as transparent materials. Increasing the camera's exposure or using a higher-power light source can help, but there are limits. For extremely dark or light-absorbing materials, laser line profilers again tend to outperform area scan 3D cameras.

Application-Specific Recommendations

Bin picking of mixed parts - Structured light is the standard choice. High point cloud density gives the pick planning algorithm the detail it needs to identify individual parts in a cluttered bin, select the most accessible target, and plan a collision-free grasp path. For bins containing metallic or shiny parts, consider cameras with anti-reflection modes or supplement with a laser line profiler.

Palletizing and depalletizing - Structured light or time-of-flight both work well for standard cardboard case palletizing. If cycle time is tight and case surfaces are consistent, time-of-flight offers faster scan cycles. If case sizes vary significantly or pallet patterns are complex, the higher resolution of structured light is worth the slightly longer scan time.

Inline quality inspection - Resolution and repeatability are the priority. Structured light is the standard for dimensional inspection and surface defect detection. For inspection of reflective or metallic parts, a laser line profiler mounted above the conveyor typically delivers more reliable results at production speed.

High-speed conveyor picking - Time-of-flight cameras handle motion better than structured light systems, making them better suited to picking from fast-moving conveyors where parts cannot be stopped for scanning. Some structured light systems offer high-speed modes, but time-of-flight is generally the safer starting point for conveyor speeds above moderate rates.

Collaborative and space-constrained cells - Stereo cameras are often the most compact and cost-effective option for cells where space is limited and surface conditions are favorable. They are a practical choice for benchtop assembly, kitting, and inspection applications where the parts are textured and lighting is controlled.

These rules of thumb are condensed into a rough selection sketch below.
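The helper below restates the recommendations in this post as a hypothetical first-pass function. The surface categories, flags, and suggestion strings are all illustrative; treat it as a starting point for discussion, not a substitute for testing on your actual parts.

```python
def suggest_sensor(surface, moving_fast=False, outdoors=False):
    """Rough first-pass 3D sensing suggestion, mirroring the guidance above.

    surface: one of "matte", "reflective", "dark", "transparent".
    """
    if surface in ("reflective", "dark"):
        return "laser line profiler (or structured light with anti-reflection mode)"
    if surface == "transparent":
        return "2D + 3D fusion or deep-learning recognition; no single 3D mode is ideal"
    if outdoors:
        return "stereo vision (structured light struggles with ambient infrared)"
    if moving_fast:
        return "time-of-flight (tolerates motion better than structured light)"
    return "structured light (highest point cloud density on matte surfaces)"

print(suggest_sensor("matte"))                     # structured light
print(suggest_sensor("reflective"))                # laser line profiler
print(suggest_sensor("matte", moving_fast=True))   # time-of-flight
```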
Which Robots Pair Well with a 3D Sensing Camera

The camera tells the robot where things are. The arm determines what it can do with that information.

For lightweight inspection, kitting, and piece picking applications, the UFactory Lite 6 ($3,500) and Fairino FR5 ($6,999) are strong starting points. For mid-range palletizing, bin picking, and food and beverage handling, the Fairino FR10 ($10,199) covers the majority of case weights and reaches a standard pallet footprint from a fixed mount. For heavier payloads or extended reach requirements, the Fairino FR16 ($11,699) and Fairino FR20 ($15,499) provide the capacity without requiring a full industrial footprint.

Blue Sky Robotics' automation software integrates 3D sensing camera output with robot motion in a unified platform, reducing the integration work that camera selection and configuration typically add to a deployment.

Where to Start

Camera selection is easier when you start with the application rather than the hardware. The Automation Analysis Tool helps evaluate your specific environment and surface conditions for feasibility. The Cobot Selector matches the right arm to your payload and workspace. And if you want to see how a specific 3D sensing camera and robot combination handles your parts before committing to hardware, book a live demo with the Blue Sky Robotics team.

The robot arm gets most of the attention in automation planning. The 3D sensing camera is what determines whether the system actually works. To learn more about computer vision software visit Blue Argus.

FAQ

Can one 3D sensing camera handle all surface types?

No single camera technology handles all surface types equally well. Structured light cameras perform best on matte, opaque surfaces. Reflective and metallic parts are better handled by laser line profilers. Transparent materials are challenging for all light-based sensing technologies, though specialized camera modes and combined 2D and 3D sensing approaches can extend the range of what a single system handles reliably.

How far away does a 3D sensing camera need to be from the scene?

Working distance varies by camera model and application. Bin picking cells typically mount the camera 600 mm to 1200 mm above the bin. Palletizing cells mount higher to cover the full pallet footprint. Each camera has a specified working volume, and the cell needs to be designed so the objects of interest fall within that volume at the expected sensing distance.

Does lighting affect 3D sensing camera performance?

Yes, significantly for structured light systems. Direct sunlight and high ambient infrared can overwhelm the projected pattern and degrade point cloud quality. Controlled indoor lighting is the standard for structured light deployments. Time-of-flight and stereo systems are generally less sensitive to ambient lighting but still benefit from consistent conditions.

How often does a 3D sensing camera need to be recalibrated?

Calibration frequency depends on the camera type, mounting stability, and how much the environment changes. A well-mounted camera in a stable indoor environment may hold calibration for months. Cells subject to vibration, temperature swings, or frequent physical disturbance to the camera mount should be checked more regularly. Most vision software platforms include calibration routines that operators can run without specialized tools.

  • 3D Laser Profiler: How It Works and Which Cobot Is Right for the Job

Most vision guided robotics systems rely on 3D area scan cameras to build a picture of the workspace. For the majority of applications, that approach works well. But there is a category of applications where area scan cameras consistently underperform: highly reflective surfaces, fast-moving parts, and inspection tasks that demand sub-millimeter surface accuracy.

A 3D laser profiler is the sensing technology that fills that gap. It produces precise, high-resolution surface profiles that area scan cameras cannot match on difficult materials, and it does so at speeds compatible with production-line conveyor rates. The industries that handle metal parts, shiny packaging, and high-tolerance manufactured components have increasingly turned to laser profiling as the sensing layer in their vision guided automation cells.

This post covers how a 3D laser profiler works, where it outperforms standard 3D cameras, and which robot arms Blue Sky Robotics recommends for laser profiler-based automation.

What a 3D Laser Profiler Actually Is

A 3D laser profiler, also called a laser line profiler or laser displacement sensor, works by projecting a line of laser light across the surface of an object and capturing the reflected profile with a camera sensor positioned at a known angle. The geometry of the setup allows the system to calculate the height of each point along the laser line with high precision.

A single laser line produces a 2D cross-sectional profile of the surface. To build a complete 3D surface map, either the object moves through the laser line on a conveyor, or the profiler is moved across a stationary object. As the object passes through the beam, the system accumulates successive 2D profiles and stitches them together into a full 3D point cloud of the surface (a minimal sketch of this step follows this section).

The key difference from area scan 3D cameras is in how the laser line interacts with the surface. Because the profiler captures reflected light from a concentrated, high-intensity laser source rather than a broad structured light pattern, it is far less susceptible to interference from ambient lighting and far more capable of extracting usable data from reflective and metallic surfaces.
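Here is a minimal sketch of that stitching step, assuming the profiler delivers each scan as a line of height values and that the conveyor travel per scan is known (in production this usually comes from an encoder rather than an assumed constant speed). All names and numbers are illustrative.

```python
import numpy as np

def stack_profiles(profiles, line_spacing_m, point_pitch_m):
    """Stitch successive 2D laser line profiles into a 3D point cloud.

    profiles: list of 1D arrays, each the measured heights (Z) along one laser line.
    line_spacing_m: conveyor travel between scans (speed / scan rate, or encoder counts).
    point_pitch_m: spacing between measured points along the laser line (X axis).
    Returns an (N, 3) array of XYZ points.
    """
    points = []
    for scan_index, heights in enumerate(profiles):
        y = scan_index * line_spacing_m           # position along conveyor travel
        for point_index, z in enumerate(heights):
            if np.isfinite(z):                    # skip dropouts on difficult spots
                points.append((point_index * point_pitch_m, y, z))
    return np.array(points)

# Example: three 5-point profiles captured 0.5 mm apart, points 0.1 mm apart
scans = [np.full(5, 0.010), np.full(5, 0.012), np.full(5, 0.011)]
cloud = stack_profiles(scans, line_spacing_m=0.0005, point_pitch_m=0.0001)
print(cloud.shape)  # (15, 3)
```

The dependence on line_spacing_m is why scan resolution along the direction of travel is set by conveyor speed and scan rate rather than by the sensor's optics.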
Why 3D Laser Profilers Outperform Area Scan Cameras on Difficult Surfaces

Standard 3D cameras struggle when light does not behave predictably at the surface of the object being scanned. Reflective and metallic parts scatter structured light at unpredictable angles, producing sparse or noisy point clouds that are not reliable enough for precise grasp planning or inspection. A 3D laser profiler sidesteps this problem for several reasons.

Concentrated laser intensity - The high intensity of the laser line overpowers many of the ambient light interference effects that degrade structured light camera performance. This makes laser profilers viable in environments with variable or challenging lighting conditions.

Controlled geometry - Because the profiler captures a single line at a time rather than an entire scene, the signal-to-noise ratio is significantly higher than in area scan systems. The system knows exactly where the laser line is and what angle the camera is positioned at, which makes depth calculations more stable even when the surface is partially reflective.

High scanning resolution - 3D laser profilers can achieve micron-level height resolution along the profile axis, making them suitable for dimensional inspection and surface quality checks that area scan cameras cannot perform reliably. This level of precision is common in electronics manufacturing, precision machined parts inspection, and food product volume measurement.

Compatibility with fast conveyors - Because the profiler scans line by line as the object moves, it does not require the scene to be stationary during capture. This makes it well suited to inline inspection and picking applications on moving conveyor lines.

Where 3D Laser Profilers Deliver the Most Value

Bin picking of metal parts - Reflective and metallic components are among the most common failure cases for area scan 3D cameras. A laser profiler's ability to extract reliable surface data from shiny parts makes it the preferred sensing approach for bin picking of machined components, stamped metal parts, and fasteners.

Inline dimensional inspection - For manufacturers running tolerance-sensitive parts, a 3D laser profiler mounted above or alongside a conveyor can measure part dimensions, detect surface defects, and flag out-of-spec items in real time without slowing the line. This is standard in automotive, electronics, and precision manufacturing environments.

Food volume and weight estimation - In food processing, laser profilers are used to measure the volume of products on a conveyor, which allows the system to estimate weight and sort items by size without contact. Poultry processing, fresh produce grading, and portioning applications all use this approach.

Weld seam inspection - Robotic welding cells use 3D laser profilers to inspect weld bead geometry after the weld is complete. The profiler measures bead width, height, and continuity and flags welds that fall outside spec, replacing manual inspection with a consistent automated check.

Packaging and fill level verification - Laser profilers can verify that containers are filled to the correct level and that packaging is sealed and undamaged before cases are closed and palletized. This is common in food and beverage, pharmaceutical, and consumer goods lines.

Which Robots Work Best with a 3D Laser Profiler

The robot arm in a laser profiler-based cell needs to match the payload and reach requirements of the specific task. The profiler itself is typically mounted above the conveyor or on the robot end-of-arm, depending on the application.

For inspection and lightweight pick and place applications where the profiler is conveyor-mounted and the robot handles the response, the UFactory Lite 6 ($3,500) and Fairino FR5 ($6,999) cover the payload range with compact footprints suited to controlled cells. For bin picking of metal parts and heavier components where payload capacity matters, the Fairino FR10 ($10,199) handles the majority of part weights encountered in machined components and stamped metal applications. For applications involving heavier parts or end-of-arm profiler mounting that adds tool weight, the Fairino FR16 ($11,699) and Fairino FR20 ($15,499) provide the payload headroom without requiring a full industrial robot footprint.

Blue Sky Robotics' automation software connects sensor output to robot motion in a unified platform, reducing the integration work that laser profiler setups typically add to a deployment.

Where to Start

If your operation handles reflective parts, runs inline inspection, or has run into the limits of standard 3D cameras on difficult materials, a 3D laser profiler is worth a closer look. The Automation Analysis Tool helps evaluate feasibility for your specific application. The Cobot Selector matches the right arm to your payload and workspace.
And if you want to see how a laser profiler-based cell handles your specific parts before committing to hardware, book a live demo with the Blue Sky Robotics team. To learn more about computer vision software visit Blue Argus.

Area scan cameras solve most vision problems. When they do not, a 3D laser profiler usually does.

FAQ

What is the difference between a 3D laser profiler and a 3D area scan camera?

An area scan camera captures a full image of the scene in a single shot and uses structured light, time-of-flight, or stereo processing to calculate depth across the entire frame. A 3D laser profiler captures a single cross-sectional line at a time and builds a full 3D surface map as the object moves through the laser line. Laser profilers are faster and more accurate on reflective surfaces but require relative motion between the sensor and the object to produce a full scan.

Can a 3D laser profiler handle transparent objects?

Laser profilers perform better than area scan cameras on some translucent materials, but fully transparent objects remain challenging for any light-based sensing technology. For clear object handling, specialized camera modes, combined 2D and 3D sensing, or deep learning-based recognition are typically more effective.

How fast can a 3D laser profiler scan?

Scan speed depends on the profiler model and the required resolution. High-end laser profilers can capture thousands of profiles per second, making them compatible with fast conveyor lines in food processing and packaging applications. Lower-speed models are sufficient for most inspection and bin picking deployments.

Do I need a systems integrator to deploy a 3D laser profiler cell?

Not necessarily. Blue Sky Robotics can help scope the right sensing and robot combination for your application and support the setup without requiring a full integration engagement. The Automation Analysis Tool is a good starting point for evaluating your specific case.

  • 3D Robot Vision: How It Works and Which Cobot Is Right for the Job

A robot arm without vision is a tool that repeats. It executes the same motion to the same coordinates on every cycle, and it depends entirely on the surrounding environment staying exactly the same. That works in highly controlled, high-volume lines built around a single product. It does not work in the mixed-SKU, variable-presentation environments that most manufacturers and distributors actually operate in.

3D robot vision changes that dynamic. By equipping a robot with the ability to see its workspace in three dimensions, the system can locate objects wherever they are, understand how they are oriented, plan a grasp path, and execute the pick without any of it being hard-coded. The robot adapts to the environment instead of demanding the environment adapt to it.

The technology has matured significantly in recent years. The cameras, software, and integration platforms that power 3D robot vision are more capable, more affordable, and easier to deploy than they have ever been. Fairino cobots start at $6,999, and a complete 3D vision-guided cell is within reach for operations that previously assumed it was out of budget.

This post covers how 3D robot vision works, where it delivers the most value, and which arms Blue Sky Robotics recommends for the job.

What 3D Robot Vision Actually Is

3D robot vision is the combination of depth-sensing cameras, processing software, and a robot controller working together to give a robotic arm spatial awareness of its environment. Unlike a standard 2D camera, which produces a flat image with no depth information, a 3D vision system produces a point cloud: a dense map of three-dimensional coordinates representing every surface in the scene.

The robot uses that point cloud to answer three questions before it moves: what is the object, where is it in space, and how should it be approached. The vision software identifies the target, calculates its position and orientation in all three axes, and generates a grasp pose the robot controller can execute. All of this happens in real time, often in under a second, before each pick (a toy sketch of this step follows this section).

The depth data itself comes from one of several sensing technologies. Structured light cameras project a known pattern onto the scene and calculate depth from how it deforms. Time-of-flight cameras measure how long emitted light pulses take to return. Stereo cameras use two offset lenses to triangulate depth from the difference between their images. Each has trade-offs in speed, resolution, and sensitivity to surface type, and the right choice depends on the application.
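As a toy illustration of the "where is it and how should it be approached" step, the sketch below reduces a segmented object's points to a grasp position and an approach direction, using the direction of least variance in the points as a stand-in for the surface normal. Production grasp planners do considerably more than this; the function is a hypothetical simplification.

```python
import numpy as np

def grasp_from_points(object_points):
    """Estimate a grasp position and approach direction from an object's 3D points.

    object_points: (N, 3) array of XYZ points belonging to one segmented object.
    Returns (position, approach_vector), where the approach vector is the
    estimated surface normal: the direction of least variance in the points.
    """
    pts = np.asarray(object_points, dtype=float)
    position = pts.mean(axis=0)              # grasp at the centroid

    # PCA: the eigenvector with the smallest eigenvalue of the covariance
    # matrix is perpendicular to the surface the points roughly lie on.
    cov = np.cov((pts - position).T)
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    normal = eigenvectors[:, 0]              # eigh sorts eigenvalues ascending
    if normal[2] < 0:                        # orient the normal to point upward
        normal = -normal
    return position, normal

# Example: points sampled from a flat, slightly tilted surface z = 0.1x + 0.3
rng = np.random.default_rng(0)
xy = rng.uniform(-0.05, 0.05, size=(200, 2))
z = 0.1 * xy[:, 0] + 0.3
pos, approach = grasp_from_points(np.column_stack([xy, z]))
print(pos, approach)  # centroid near (0, 0, 0.3); normal near (-0.1, 0, 1) normalized
```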
Why 3D Vision Makes Robots Dramatically More Useful

The gap between a robot with 3D vision and one without is not incremental. It is the difference between a system that can handle one carefully controlled scenario and one that can handle the variability of a real production environment.

Handling random part orientation - Without 3D vision, every part must arrive in a known position and orientation. With it, the robot scans the scene, identifies each part regardless of how it landed, and calculates the correct approach angle for a successful pick. Bin picking of randomly oriented parts is not possible without depth information.

Adapting to changeovers without reprogramming - When a product changes, a 3D vision system identifies the new item and adjusts grasp parameters through a graphical interface. Operators do not need to rewrite robot paths or call an integrator. The system recognizes what it is looking at and responds accordingly.

Enabling inspection at production speed - 3D robot vision can measure part dimensions, detect surface anomalies, and verify assembly completeness inline at conveyor speed. The same cell that picks can also inspect, reducing the need for separate quality control stations and the manual labor that goes with them.

Supporting safe human-robot collaboration - 3D vision is also used to monitor the workspace around a collaborative robot. The system detects when a person enters the robot's operating zone and adjusts speed or halts motion automatically, enabling closer human-robot collaboration without physical barriers.

Where 3D Robot Vision Delivers the Most Value

Bin picking and kitting - This is the application where 3D robot vision has the highest impact. Parts in a bin are randomly oriented and often touching or overlapping. A 3D vision system identifies each part, selects the most accessible pick target, plans a collision-free path around neighboring parts, and executes the pick. Applications in machined parts, fasteners, electronics components, and consumer goods all benefit.

Pick and place on moving conveyors - 3D vision allows a robot to track and pick items off a moving conveyor without requiring them to be stopped, aligned, or presented in a fixed position. The system updates its grasp calculation in real time as the conveyor moves, enabling higher throughput with less upstream fixturing.

Palletizing and depalletizing - A 3D camera mounted above the work area gives the robot real-time information about case position and orientation on the conveyor and pallet surface. Mixed case sizes, angled items, and variable product presentation are all manageable. Blue Sky Robotics deploys vision-guided palletizing cells for operations across logistics, food and beverage, and manufacturing.

Quality inspection - Surface defects, dimensional out-of-spec conditions, missing components, and incorrect label placement are all detectable with 3D robot vision at production speed. The system applies the same inspection standard on every part, every shift, with a data trail that manual inspection cannot produce.

Painting and surface finishing - Blue Sky Robotics' AutoCoat system uses vision to map the surface of a part before the robot applies paint, powder coat, or adhesive. The robot adjusts its path to the actual geometry of each part rather than following a fixed spray pattern, reducing waste and rework.

Which Robots Work Best with 3D Robot Vision

The vision system tells the robot what to do. The arm determines what it is physically capable of doing. Matching the two correctly is what makes a cell reliable in production.

For lightweight piece picking, inspection, and collaborative applications, the UFactory Lite 6 ($3,500) provides a compact, affordable entry point with the repeatability needed for vision-guided work alongside human operators. For general-purpose pick and place, bin picking, and mid-range palletizing, the Fairino FR5 ($6,999) and Fairino FR10 ($10,199) cover the majority of part weights and reach a standard pallet footprint from a fixed mount position. For heavier components, extended reach requirements, or end-of-arm tooling that adds weight to the payload calculation, the Fairino FR16 ($11,699) and Fairino FR20 ($15,499) provide the capacity without a full industrial robot footprint.
Blue Sky Robotics' automation software connects 3D vision output to robot motion in a unified platform, handling the integration layer that typically adds complexity and time to vision-guided deployments.

Where to Start

If your operation relies on manual picking, fixed automation, or has struggled to find a vision-guided solution that handles your specific environment, the Automation Analysis Tool is a practical starting point for evaluating feasibility. The Cobot Selector matches the right arm to your payload and workspace. And if you want to see how 3D robot vision handles your specific parts or application before committing to hardware, book a live demo with the Blue Sky Robotics team.

A robot without vision knows where things should be. A robot with 3D vision knows where they actually are. To learn more about computer vision software visit Blue Argus.

FAQ

What is the difference between 2D and 3D robot vision?

2D robot vision captures flat images and can identify objects, read labels, and detect surface features, but it cannot determine depth or three-dimensional orientation. 3D robot vision adds depth information, enabling the robot to locate objects in full three-dimensional space, handle variable part orientations, and perform tasks like bin picking that 2D vision cannot support.

How accurate is 3D robot vision?

Accuracy depends on the camera technology, working distance, calibration quality, and the reflectivity of the target surface. Well-configured structured light systems achieve sub-millimeter repeatability on standard opaque surfaces. Reflective, transparent, or translucent materials may require specialized camera modes or alternative sensing approaches to achieve reliable results.

Can 3D robot vision handle multiple different parts in the same bin?

Yes. Deep learning-based vision software can be trained to recognize and differentiate multiple part types in the same scene. The robot identifies each part individually, selects the best pick target based on accessibility and orientation, and adjusts its grasp parameters accordingly. This is common in kitting and mixed-SKU fulfillment applications.

Do I need a dedicated integrator to deploy a 3D robot vision cell?

Not necessarily. Modern vision-guided automation platforms with graphical interfaces and code-free configuration have significantly reduced the integration burden. Blue Sky Robotics can help scope the right cell and support the setup without requiring a full third-party integration engagement.

  • 3D Sensor Camera: What the Data Actually Tells Your Robot and Why It Matters

Most conversations about 3D sensor cameras stay at the surface level. They cover the technology types, the specs, the price ranges. What they rarely cover is what the data a 3D sensor camera produces actually means for the robot receiving it, and why the quality of that data has a direct, measurable impact on what your automation cell can and cannot do.

A 3D sensor camera does not just take pictures. It generates a continuous stream of spatial measurements that the robot uses to make decisions: where to reach, how to approach, whether a pick is viable, and when to stop and wait for better data. Understanding what good data looks like, what bad data costs you, and what factors in your environment affect data quality is what separates a cell that runs reliably in production from one that performs well in a demo and struggles on the floor.

This post takes a data-first look at 3D sensor cameras: what they produce, what degrades it, and how to build a cell around a sensor that gives your robot what it actually needs.

What a 3D Sensor Camera Actually Produces

The output of a 3D sensor camera is a point cloud: a collection of three-dimensional coordinates, each one representing a measured point on a surface in the scene. A dense, accurate point cloud gives the robot's vision software a precise spatial model of what is in front of it. A sparse, noisy, or incomplete point cloud forces the software to make assumptions, lower its confidence thresholds, or fail to generate a valid grasp pose entirely.

Point cloud quality is not binary. It exists on a spectrum, and the practical consequences of moving along that spectrum are significant (the sketch after these three categories shows one way to quantify them).

Dense point clouds - High point density means more measured points per square centimeter of surface. This gives the pick planning algorithm more data to work with when identifying object boundaries, calculating surface normals for grasp orientation, and distinguishing between objects that are close together or partially overlapping. Dense point clouds are what structured light cameras produce under ideal conditions, and they are what bin picking and precision assembly applications require.

Sparse point clouds - When surface properties, lighting conditions, or sensing distance degrade the return signal, the camera captures fewer valid points. The software can often still generate a grasp pose from a sparse cloud, but confidence is lower and the risk of a failed or inaccurate pick increases. Sparse clouds are common on dark, reflective, or transparent surfaces, and at the edges of the camera's working volume.

Noisy point clouds - Noise refers to measured points that are in the wrong position, typically caused by multipath reflections, ambient light interference, or surface scattering. A noisy point cloud is often worse than a sparse one because incorrect data can actively mislead the pick planning algorithm rather than simply giving it less to work with.
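A minimal sketch of how that spectrum can be measured, assuming the sensor reports its output as a depth image with NaN marking missing returns. The metric names and example values are illustrative, not from any particular vision platform: coverage drops on sparse clouds, and local roughness rises on noisy ones.

```python
import numpy as np

def depth_quality(depth_image):
    """Quick quality metrics for a depth image (NaN marks missing returns).

    Returns (coverage, roughness): the fraction of pixels with a valid return,
    and the median absolute depth change between horizontally adjacent pixels.
    """
    valid = np.isfinite(depth_image)
    coverage = float(valid.mean())

    # Real surfaces change depth smoothly between neighboring pixels;
    # multipath and scatter noise do not, so large jumps indicate noise.
    diffs = np.abs(np.diff(depth_image, axis=1))
    diffs = diffs[np.isfinite(diffs)]
    roughness = float(np.median(diffs)) if diffs.size else float("nan")
    return coverage, roughness

depth = np.full((4, 4), 1.00)   # a flat surface 1 m away
depth[0, 0] = np.nan            # one dropout (sparsity)
depth[2, 2] = 1.50              # one spike (noise)
print(depth_quality(depth))     # coverage ~0.94, median roughness 0.0
```

Metrics like these are useful as a per-frame gate: if coverage falls below a chosen threshold, the cell can trigger a rescan instead of handing the planner data it cannot trust.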
What Degrades 3D Sensor Camera Data in Production

Understanding what causes point cloud quality to drop is more useful than knowing what a sensor produces under ideal lab conditions. Production environments are not ideal, and the gap between lab performance and floor performance is where most vision-guided cell problems originate.

Surface properties - Matte, opaque surfaces reflect structured light cleanly and produce the densest point clouds. Shiny, reflective, or metallic surfaces scatter light at unpredictable angles and produce sparse or noisy data. Transparent surfaces let light pass through rather than reflecting it, producing little to no usable depth data. Dark, light-absorbing surfaces behave similarly to transparent ones: the sensor does not get enough return signal to measure reliably.

Ambient lighting - Structured light cameras are sensitive to ambient infrared, which means sunlight, certain industrial lighting types, and proximity to other infrared sources can interfere with the projected pattern and degrade point cloud quality. Controlling the lighting environment around a 3D sensor camera is one of the most impactful steps in building a reliable cell.

Object motion - Structured light cameras require the scene to be stationary during the capture cycle. On fast-moving conveyors, this means motion blur and frame misalignment that smear the point cloud. Time-of-flight cameras handle motion better, but they have their own speed limits. Any 3D sensor camera used in a conveyor picking application needs to be matched to the conveyor speed.

Sensing distance and angle - Every 3D sensor camera has a specified working volume. Objects too close, too far, or at extreme angles relative to the sensor may produce degraded data even on ideal surfaces. Cell design needs to account for the full range of object positions and orientations the sensor will encounter, not just the nominal center case.

Temperature and vibration - In industrial environments, temperature swings can affect sensor calibration over time. Vibration from nearby machinery can introduce measurement instability. Both factors are worth accounting for in mounting design and calibration frequency planning.

What Good Data Quality Enables

The reason point cloud quality matters is concrete and operational. Here is what improves directly when the 3D sensor camera is producing reliable data.

Higher pick success rates - A robot acting on a dense, accurate point cloud picks more reliably on the first attempt. Fewer failed picks mean less downtime, fewer error recovery cycles, and higher throughput per shift.

Faster cycle times - When data quality is high, the vision software reaches a confident grasp pose decision quickly. When it is low, the software either takes longer to process or requests a rescan. Consistent data quality is what keeps cycle times tight across a full production run.

Smaller safety margins - With precise spatial data, the robot can approach objects more closely and confidently without needing large clearance buffers to account for positional uncertainty. This matters in bin picking, where parts are often close together and the margin for a clean pick is narrow.

Reliable inspection results - For inspection applications, point cloud accuracy directly determines whether the system can detect defects, measure dimensions, and flag out-of-spec parts at the required tolerance. Noisy data produces false positives and missed defects in roughly equal measure.

Which Robots Pair Well with a 3D Sensor Camera

The data quality produced by the sensor sets the ceiling on what the robot can do. The arm needs to match the physical requirements of the task.

For lightweight picking, inspection, and kitting where data quality and precision matter most, the UFactory Lite 6 ($3,500) and Fairino FR5 ($6,999) offer the repeatability needed to act on high-quality point cloud data accurately. For mid-range palletizing, bin picking, and material handling, the Fairino FR10 ($10,199) handles the majority of case weights and reaches a standard pallet footprint from a fixed mount.
For heavier payloads or extended reach requirements, the Fairino FR16 ($11,699) and Fairino FR20 ($15,499) provide the capacity without requiring a full industrial robot footprint.

Blue Sky Robotics' automation software connects 3D sensor camera output to robot motion in a unified platform, handling the data pipeline between the sensor and the controller that typically adds integration complexity to vision-guided deployments.

Where to Start

If your operation is evaluating a vision-guided cell and wants to make sure the sensing layer is matched to the job before committing to hardware, the Automation Analysis Tool is a practical starting point. The Cobot Selector matches the right arm to your payload and workspace. And if you want to see how a 3D sensor camera performs on your specific parts and environment before you buy anything, book a live demo with the Blue Sky Robotics team.

To learn more about computer vision software visit Blue Argus.

The robot arm is what people see. The 3D sensor camera is what makes the robot worth watching.

FAQ

What is the difference between a 3D sensor camera and a 3D sensing camera?

The terms are used interchangeably in the industry. Both refer to cameras or sensor systems that capture depth information in addition to standard image data, producing three-dimensional point clouds that vision software can use for object localization, grasp planning, and inspection.

How do I know if my point cloud quality is good enough for my application?

The practical test is pick success rate and cycle time consistency in your actual environment, not just under demo conditions. A well-configured system should achieve high first-attempt pick success on your specific parts and surfaces. If success rates are lower than expected, point cloud quality is usually the first place to investigate before looking at robot arm calibration or software settings.

Can a 3D sensor camera be mounted on the robot arm instead of above the cell?

Yes. Eye-in-hand mounting, where the camera is attached to the robot's end-of-arm, allows the sensor to get closer to objects and scan from multiple angles. This is useful for bin picking of small parts and inspection applications where a fixed overhead mount cannot capture the required detail. The trade-off is added weight on the end-of-arm, which reduces the effective payload capacity of the arm.

How often does a 3D sensor camera need recalibration?

In a stable indoor environment with consistent temperature and a secure mount, calibration can hold for months. Cells subject to vibration, temperature variation, or physical disturbance to the camera mount should be checked more regularly. Most vision platforms include operator-accessible calibration routines that do not require specialized tools or external support.
