3D Vision Technologies: A Plain-Language Guide for Manufacturers

  • Apr 8
  • 5 min read

"3D vision" is used as if it describes a single thing. It does not. There are at least four distinct technologies that produce 3D spatial data, each using different physics, different hardware, and suited to different industrial applications.


Choosing between them without understanding those differences leads to cells that underperform or fail entirely on the parts they were supposed to handle.

This post explains the four core 3D vision technologies used in industrial robotics, how each one works, what each one is good at, and where each one falls short. It is a technology comparison built around practical manufacturing decisions rather than academic detail.


Why the Technology Choice Matters


Every 3D vision technology produces a point cloud: a spatial map of the scene where each point has X, Y, and Z coordinates. The differences lie in how that point cloud is generated and how reliable it is across different surface types, speeds, and lighting conditions.
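In concrete terms, a point cloud is just an N×3 array of coordinates. The sketch below (plain NumPy with toy values, not output from any real camera) shows how a picking application might isolate the top layer of parts in a bin by filtering on the Z axis:

```python
import numpy as np

# A toy point cloud: each row is one measured (X, Y, Z) point in millimeters.
cloud = np.array([
    [10.0, 5.0, 200.0],
    [12.0, 6.0, 198.5],
    [40.0, 30.0, 350.0],   # a point deeper in the bin
    [41.0, 29.0, 352.0],
])

# Keep only points within 5 mm of the nearest surface (smallest Z = closest).
nearest = cloud[:, 2].min()
top_layer = cloud[cloud[:, 2] <= nearest + 5.0]
print(top_layer.shape[0])  # number of points on the top layer -> 2
```

Every technology in this post ultimately produces data in this shape; what differs is how densely, how accurately, and on which surfaces the points are measured.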


A structured light camera that excels at mapping reflective metal parts will struggle to keep pace with a high-speed conveyor. A Time-of-Flight sensor that covers a large area at high frame rates may not deliver the accuracy needed for precision inspection. A stereo camera that works well on textured plastic parts will produce noisy data on a shiny aluminum casting.


Matching the technology to the application is the most consequential decision in designing a 3D vision cell. Getting it right means reliable production performance. Getting it wrong means a cell that works in the demo and fails in the plant.


Structured Light


Structured light is the dominant technology in industrial 3D vision for demanding applications. It works by projecting a known pattern of light onto the scene, typically a series of stripes or a more complex coded sequence, and measuring how that pattern deforms as it conforms to the surfaces of objects in the scene.


The deformation of the projected pattern encodes depth with high precision. A flat surface produces an undistorted pattern. A curved surface bends it. An edge creates a sharp discontinuity. Processing software reconstructs the 3D geometry from these deformations, producing a dense and accurate point cloud.
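One common way to make the projected sequence decodable is Gray-coded stripe patterns: each pattern contributes one bit per pixel, and the bit history identifies which projector stripe lit that pixel. This is a simplified sketch of the decoding step, not any specific camera's firmware:

```python
def decode_stripe_index(bits):
    """Decode the Gray-code bit sequence observed at one camera pixel.

    Each projected pattern contributes one bit (1 = pixel was lit,
    0 = pixel was dark). The decoded integer identifies which
    projector stripe illuminated that pixel, which is what the
    triangulation step needs to compute depth.
    """
    gray = 0
    for b in bits:
        gray = (gray << 1) | b
    # Convert the Gray-code value to an ordinary binary stripe index.
    index = gray
    mask = gray >> 1
    while mask:
        index ^= mask
        mask >>= 1
    return index

# A pixel lit during the middle two patterns of a 4-pattern sequence:
print(decode_stripe_index([0, 1, 1, 0]))  # -> stripe 4
```

Gray codes are used because adjacent stripes differ by a single bit, so a one-pattern decoding error shifts the result by only one stripe instead of corrupting it entirely.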


Structured light handles the surface conditions that defeat most other technologies: reflective metals, dark rubber, low-contrast materials, and geometrically complex parts. This is why it is the standard choice for industrial bin picking, palletizing, and precision inspection.


The primary tradeoff is acquisition time. Projecting and capturing the pattern sequence takes more time than single-shot depth methods, which means structured light systems require parts to be relatively still during the scan.


Stereo Vision


Stereo vision calculates depth by comparing two images captured simultaneously from two cameras mounted side by side, similar to how human eyes work. For any point visible in both images, the horizontal shift between its position in the left image and the right image encodes its distance from the cameras. The software processes these disparities across the full image to build a depth map.
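The geometry reduces to one relation: depth equals focal length times baseline divided by disparity (Z = f·B/d). A minimal sketch of the per-point calculation, with illustrative numbers not taken from any specific camera:

```python
def stereo_depth_mm(focal_px, baseline_mm, disparity_px):
    """Classic pinhole stereo relation: Z = f * B / d.

    focal_px     -- focal length in pixels (from camera calibration)
    baseline_mm  -- distance between the two camera centers
    disparity_px -- horizontal shift of a point between left and right images
    """
    if disparity_px <= 0:
        raise ValueError("point not matched between the two images")
    return focal_px * baseline_mm / disparity_px

# 50 mm baseline, 640 px focal length, 32 px disparity -> 1 meter away.
print(stereo_depth_mm(focal_px=640.0, baseline_mm=50.0, disparity_px=32.0))
```

The relation also explains why stereo accuracy falls off with distance: disparity shrinks as depth grows, so a one-pixel matching error costs far more accuracy on distant points than on near ones.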


Stereo cameras are compact, affordable, and fast. Because they do not require a projector, they are less susceptible to ambient light interference than structured light systems can be. For standard industrial parts with sufficient surface texture under reasonable lighting, they produce point clouds accurate enough for pick and place, machine tending, and general-purpose material handling.


The limitation is surface quality. Featureless surfaces, highly reflective materials, and dark objects produce inconsistent disparity maps that degrade point cloud quality. The Intel RealSense D435 and Luxonis OAK-D-Pro-PoE are the most widely deployed stereo cameras in cobot applications and are natively supported by UFactory's vision SDK.


Time-of-Flight


Time-of-Flight sensors measure depth by emitting pulses of infrared light and timing how long each pulse takes to return from the scene. Since light travels at a known speed, the return time directly encodes distance. The sensor builds a full depth map by measuring return times across its entire field of view simultaneously, typically at high frame rates.
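The arithmetic behind each pixel is simple: distance is the speed of light times half the round-trip time. A sketch of the calculation (the timing value is illustrative; many real sensors measure modulated phase shift rather than raw pulse timing):

```python
def tof_distance_mm(round_trip_ns):
    """Pulsed time-of-flight: distance = c * t / 2.

    The pulse travels to the target and back, so the one-way
    distance is half the round-trip time times the speed of light.
    """
    SPEED_OF_LIGHT_MM_PER_NS = 299.792458
    return SPEED_OF_LIGHT_MM_PER_NS * round_trip_ns / 2.0

# A return after ~6.67 nanoseconds corresponds to roughly 1 meter:
print(round(tof_distance_mm(6.67)))  # -> 1000 (mm)
```

The nanosecond timescales involved are also why ToF per-point noise is higher than triangulation-based methods: a timing error of a fraction of a nanosecond already shifts the measured distance by centimeters.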


ToF sensors are the right choice when speed and area coverage matter more than fine detail. They produce depth maps continuously at 30 frames per second or faster, making them well suited for tracking moving objects on conveyors, monitoring large workspaces for safety applications, and any scenario where real-time depth awareness across a wide field of view is the primary requirement.


The tradeoff is resolution and per-point accuracy. ToF sensors produce lower-density depth maps than structured light and typically have higher per-point noise, which limits their usefulness for precision inspection or fine-feature detection.


Laser Profiling


Laser profilers are the precision tier of 3D vision. They project a single laser line across the target and capture how that line deforms as the target moves through the sensor's field of view, building up a 3D profile scan line by line at high resolution.
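How the line-by-line scan becomes a full 3D surface can be sketched as a simple assembly loop: each profile supplies X and Z, and the part's motion through the scan zone supplies Y. The line rate and conveyor speed below are illustrative assumptions, not tied to any specific profiler:

```python
import numpy as np

def stack_profiles(profiles, line_rate_hz, conveyor_mm_per_s):
    """Assemble successive laser-line profiles into a 3D point cloud.

    profiles          -- list of (x_mm, z_mm) arrays, one per scan line
    line_rate_hz      -- profiles the sensor captures per second
    conveyor_mm_per_s -- speed of the part through the scan zone

    Each profile's Y coordinate comes from when it was captured:
    the part advances (conveyor speed / line rate) mm per profile.
    """
    step_mm = conveyor_mm_per_s / line_rate_hz
    points = []
    for i, (x, z) in enumerate(profiles):
        y = np.full_like(x, i * step_mm)
        points.append(np.column_stack([x, y, z]))
    return np.vstack(points)

# Two toy profiles from a sensor at 1000 Hz over a 200 mm/s conveyor:
profiles = [
    (np.array([0.0, 1.0]), np.array([5.0, 5.2])),
    (np.array([0.0, 1.0]), np.array([5.1, 5.3])),
]
cloud = stack_profiles(profiles, line_rate_hz=1000.0, conveyor_mm_per_s=200.0)
print(cloud.shape)  # (4, 3): four (X, Y, Z) points
```

This is also why encoder or speed feedback matters in profiling cells: any error in the assumed conveyor speed stretches or compresses the reconstructed part along the scan direction.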


This approach achieves the highest measurement accuracy of any 3D vision technology, with Z repeatability reaching 0.2 micrometers on high-end industrial models. That level of precision is what enables connector pin height inspection, battery cell lid measurement, PCB flatness verification, and other applications where features measured in microns need to be verified reliably at production speed.


Laser profilers are not general-purpose 3D cameras. They scan along a single axis, which means the part and the sensor must move relative to each other during measurement. They are best deployed at dedicated inline inspection stations where parts are conveyed through the scan zone, rather than in a general-purpose robot guidance role.


Choosing the Right Technology


The decision map is straightforward once the application is defined clearly.

For demanding bin picking, palletizing, and inspection of reflective or dark industrial parts, structured light is the appropriate choice. For entry-level pick and place and machine tending with standard parts, stereo vision is more affordable and fully capable. For high-speed conveyor tracking or large-area workspace monitoring, Time-of-Flight delivers the frame rate and coverage needed. For dimensional inspection of small features where sub-millimeter accuracy is required, laser profiling is the right tool.


Many production cells combine technologies. A stereo camera guides the robot for general handling. A laser profiler sits at a dedicated inspection station for dimensional verification. The two operate in complementary roles that play to the strengths of each.


Blue Sky Robotics' Blue Argus platform ships as a complete kit built around a 3D depth camera, with pre-trained vision software that requires no model training for most applications. That removes the integration barrier that typically makes 3D vision deployment complex, regardless of which sensor technology is used.

For the robot arm layer, the UFactory Lite 6 ($3,500) covers entry-level stereo vision applications. The Fairino FR5 ($6,999) and Fairino FR10 ($10,199) handle production-grade structured light and industrial 3D camera integrations across bin picking, palletizing, and inspection.


Getting Started


Use our Cobot Selector to match an arm and vision technology to your application. Browse our full UFactory lineup and Fairino cobots with current pricing, or book a live demo to see a 3D vision cell in action.


FAQ


What are the main 3D vision technologies used in robotics?

The four core technologies are structured light, stereo vision, Time-of-Flight, and laser profiling. Each uses different physics to capture depth data and has distinct strengths in terms of accuracy, speed, surface compatibility, and cost.


Which 3D vision technology is most accurate?

Laser profiling achieves the highest measurement accuracy, with Z repeatability reaching 0.2 micrometers on industrial models. Structured light is the most accurate general-purpose 3D imaging technology for demanding surfaces. Stereo vision is accurate enough for most robot guidance applications but less reliable on featureless or reflective surfaces.


Can I use multiple 3D vision technologies in the same robot cell?

Yes, and it is often the right approach. A stereo camera for robot guidance and a laser profiler for inline dimensional inspection is a common combination that uses each technology where it performs best.
