3D Sensing Camera: How to Choose the Right One for Your Automation Cell
When people start planning a vision-guided automation cell, the conversation usually jumps quickly to the robot arm: payload, reach, price. The camera often gets treated as an afterthought, something to sort out during integration. That is a mistake.
The 3D sensing camera is the part of the system that determines what the robot knows. A robot arm paired with the wrong camera for the application will underperform regardless of how capable the arm itself is. Transparent parts will not be detected reliably. Reflective surfaces will return noisy point clouds. Fast conveyors will blur the scan. The whole cell will run below its potential because the sensing layer was not matched to the job.
This post is not about vision-guided robotics in general. It is specifically about how to think through 3D sensing camera selection: the three main technologies, what each one is actually good at, where each one struggles, and how to match the right camera to your specific application before you commit to hardware.
The Three Main 3D Sensing Technologies
There is no single 3D sensing camera technology that is best for every application. The right choice depends on your surface types, lighting conditions, required resolution, and cycle time. Here is how the three main approaches compare.
Structured light - A structured light camera projects a known pattern, typically a grid or series of stripes, onto the scene and captures how that pattern deforms across the surfaces it hits. The deformation encodes depth information, which the software converts into a point cloud. Structured light cameras produce the highest point cloud density and resolution of the three approaches, which makes them the default choice for bin picking, palletizing, and precision assembly. The trade-off is sensitivity: they require controlled indoor lighting and struggle on transparent and highly reflective surfaces where the projected pattern does not reflect cleanly back to the sensor.
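To make the depth-to-point-cloud step concrete, here is a minimal sketch of back-projecting a depth map into 3D points with the pinhole camera model. The intrinsics (fx, fy, cx, cy) and image size are illustrative placeholders, not values from any particular structured light camera; a real system uses the calibration shipped with the sensor.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth map (meters) into an Nx3 point cloud using
    the pinhole model. Pixels with no depth (<= 0) are dropped."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth_m > 0
    z = depth_m[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.column_stack((x, y, z))

# Illustrative intrinsics and a synthetic depth map, for demonstration only.
cloud = depth_to_point_cloud(np.random.uniform(0.6, 1.2, (480, 640)),
                             fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (N, 3)
```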
Time-of-flight - A time-of-flight camera emits pulses of light and measures how long each pulse takes to return to the sensor. Depth is calculated directly from that travel time. Time-of-flight cameras are faster than structured light systems and less sensitive to ambient lighting variation, which makes them better suited to applications with variable lighting or faster cycle time requirements. The trade-off is resolution: time-of-flight point clouds are typically less dense than structured light output, which limits their usefulness for high-precision inspection or fine-detail pick applications.
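The underlying relation for a pulsed time-of-flight measurement is simply distance = speed of light × round-trip time / 2, since the light travels out and back. A tiny illustrative sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_depth(round_trip_time_s):
    """Depth from a pulsed time-of-flight measurement:
    half the round-trip distance."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A roughly 6.67 ns round trip corresponds to about 1 m of depth.
print(f"{tof_depth(6.67e-9):.3f} m")
```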
Stereo vision - A stereo camera uses two offset lenses to calculate depth by comparing the slightly different images each lens captures, the same principle as human binocular vision. Stereo systems work well in textured scenes with plenty of surface detail for the algorithm to match across the two frames. They are often the most cost-effective 3D sensing option and can perform well outdoors where structured light systems are limited by ambient infrared. The trade-off is that stereo systems struggle on surfaces with low texture or uniform color, where there is not enough visual contrast for the matching algorithm to work accurately.
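As a rough sketch of the stereo pipeline, here is how a disparity map can be computed with OpenCV's semi-global block matcher and converted to depth via Z = focal length × baseline / disparity. The file paths, matcher parameters, focal length, and baseline below are illustrative placeholders, not recommendations for any specific camera.

```python
import cv2
import numpy as np

# Rectified left/right grayscale images (paths are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; parameters are illustrative starting points.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point x16

# Depth from disparity: Z = f * B / d (focal length in pixels, baseline in meters).
focal_px, baseline_m = 700.0, 0.06  # illustrative values
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```

Note how the low-texture limitation shows up directly in this sketch: pixels the matcher cannot resolve come back with no valid disparity and therefore no depth.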
How Surface Type Should Drive Your Camera Choice
The single biggest factor in 3D sensing camera selection is often the surface properties of the objects you are handling. Getting this wrong is the most common reason vision-guided cells underperform in production.
Matte, opaque surfaces - This is the ideal scenario for all three technologies. Structured light will give you the best resolution and point cloud density. If cycle time or lighting flexibility matters more than resolution, time-of-flight is a strong alternative.
Reflective and metallic surfaces - Structured light cameras struggle here because the projected pattern scatters off shiny surfaces at unpredictable angles. A laser line profiler is typically the better choice for highly reflective parts, as the concentrated laser intensity overpowers much of the interference that degrades structured light performance. For moderately reflective surfaces, some structured light cameras offer dedicated modes that reduce sensitivity to specular reflection.
Transparent and translucent materials - This is the hardest category for any light-based 3D sensing camera. Light passes through rather than reflecting cleanly, producing sparse or absent point cloud data. Some camera manufacturers offer specialized modes for transparent materials that improve detection, but performance is still limited compared to opaque surfaces. For high-mix picking environments that include clear objects, combining 2D image data with 3D depth data, or using deep learning-based recognition trained on transparent materials, typically produces more reliable results than relying on any single 3D sensing mode.
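One simple way to combine 2D and 3D data is to flag detections where a 2D detector sees an object but the depth map is mostly empty, which is a typical signature of a transparent part, and route those picks to a fallback strategy. A minimal sketch, assuming a separate 2D detector (not shown) supplies the object mask:

```python
import numpy as np

def depth_mostly_missing(depth_m, object_mask, min_coverage=0.5):
    """True if the 3D camera returned depth for less than min_coverage of
    the pixels inside a 2D detection mask -- a hint that the part is
    transparent or highly reflective and needs a fallback sensing mode.

    depth_m: HxW depth map in meters, 0 where the camera returned no data
    object_mask: HxW boolean mask from a 2D detector (hypothetical upstream step)
    """
    object_pixels = object_mask.sum()
    if object_pixels == 0:
        return False
    depth_coverage = np.count_nonzero(depth_m[object_mask] > 0) / object_pixels
    return depth_coverage < min_coverage
```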
Dark or light-absorbing surfaces - Very dark surfaces absorb the projected or emitted light rather than reflecting it, which produces the same sparse point cloud problem as transparent materials. Increasing the camera's exposure or using a higher-power light source can help, but there are limits. For extremely dark or light-absorbing materials, laser line profilers again tend to outperform area scan 3D cameras.
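If your camera can capture at more than one exposure setting, a crude but useful step is to merge the resulting depth maps, filling pixels the short exposure missed with data from the longer exposure that dark surfaces need. A minimal sketch, assuming both depth maps are aligned and use 0 for missing data:

```python
import numpy as np

def merge_exposures(depth_short, depth_long):
    """Merge depth maps captured at two exposure settings. Bright surfaces
    are taken from the short exposure (less risk of saturation); pixels the
    short exposure missed, often dark or light-absorbing areas, are filled
    from the long exposure."""
    merged = depth_short.copy()
    missing = depth_short <= 0
    merged[missing] = depth_long[missing]
    return merged
```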
Application-Specific Recommendations
Bin picking of mixed parts - Structured light is the standard choice. High point cloud density gives the pick planning algorithm the detail it needs to identify individual parts in a cluttered bin, select the most accessible target, and plan a collision-free grasp path. For bins containing metallic or shiny parts, consider cameras with anti-reflection modes or supplement with a laser line profiler.
Palletizing and depalletizing - Structured light or time-of-flight both work well for standard cardboard case palletizing. If cycle time is tight and case surfaces are consistent, time-of-flight offers faster scan cycles. If case sizes vary significantly or pallet patterns are complex, the higher resolution of structured light is worth the slightly longer scan time.
Inline quality inspection - Resolution and repeatability are the priority. Structured light is the standard for dimensional inspection and surface defect detection. For inspection of reflective or metallic parts, a laser line profiler mounted above the conveyor typically delivers more reliable results at production speed.
High-speed conveyor picking - Time-of-flight cameras handle motion better than structured light systems, making them better suited to picking from fast-moving conveyors where parts cannot be stopped for scanning. Some structured light systems offer high-speed modes, but time-of-flight is generally the safer starting point for anything faster than a slow-moving line.
Collaborative and space-constrained cells - Stereo cameras are often the most compact and cost-effective option for cells where space is limited and surface conditions are favorable. They are a practical choice for benchtop assembly, kitting, and inspection applications where the parts are textured and lighting is controlled.
Which Robots Pair Well with a 3D Sensing Camera
The camera tells the robot where things are. The arm determines what it can do with that information. For lightweight inspection, kitting, and piece picking applications, the UFactory Lite 6 ($3,500) and Fairino FR5 ($6,999) are strong starting points.
For mid-range palletizing, bin picking, and food and beverage handling, the Fairino FR10 ($10,199) covers the majority of case weights and reaches a standard pallet footprint from a fixed mount. For heavier payloads or extended reach requirements, the Fairino FR16 ($11,699) and Fairino FR20 ($15,499) provide the capacity without requiring a full industrial footprint.
Blue Sky Robotics' automation software integrates 3D sensing camera output with robot motion in a unified platform, reducing the integration work that camera selection and configuration typically add to a deployment.
Where to Start
Camera selection is easier when you start with the application rather than the hardware. The Automation Analysis Tool helps evaluate your specific environment and surface conditions for feasibility. The Cobot Selector matches the right arm to your payload and workspace. And if you want to see how a specific 3D sensing camera and robot combination handles your parts before committing to hardware, book a live demo with the Blue Sky Robotics team.
The robot arm gets most of the attention in automation planning. The 3D sensing camera is what determines whether the system actually works. To learn more about computer vision software, visit Blue Argus.
FAQ
Can one 3D sensing camera handle all surface types?
No single camera technology handles all surface types equally well. Structured light cameras perform best on matte, opaque surfaces. Reflective and metallic parts are better handled by laser line profilers. Transparent materials are challenging for all light-based sensing technologies, though specialized camera modes and combined 2D and 3D sensing approaches can extend the range of what a single system handles reliably.
How far away does a 3D sensing camera need to be from the scene?
Working distance varies by camera model and application. Bin picking cells typically mount the camera 600mm to 1200mm above the bin. Palletizing cells mount higher to cover the full pallet footprint. Each camera has a specified working volume, and the cell needs to be designed so the objects of interest fall within that volume at the expected sensing distance.
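A quick sanity check during cell design is to verify that the expected part locations actually fall inside the camera's specified working volume. A minimal sketch follows; the box limits are illustrative placeholders, so substitute the figures from your camera's datasheet.

```python
import numpy as np

def fraction_inside_working_volume(points, z_range=(0.6, 1.2),
                                   x_half_extent=0.5, y_half_extent=0.4):
    """Fraction of an Nx3 point cloud (camera frame, meters) that falls
    inside a box-shaped working volume. The default limits are
    illustrative, not from any specific camera."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    inside = ((z >= z_range[0]) & (z <= z_range[1])
              & (np.abs(x) <= x_half_extent)
              & (np.abs(y) <= y_half_extent))
    return inside.mean()
```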
Does lighting affect 3D sensing camera performance?
Yes, significantly for structured light systems. Direct sunlight and high ambient infrared can overwhelm the projected pattern and degrade point cloud quality. Controlled indoor lighting is the standard for structured light deployments. Time-of-flight and stereo systems are generally less sensitive to ambient lighting but still benefit from consistent conditions.
How often does a 3D sensing camera need to be recalibrated?
Calibration frequency depends on the camera type, mounting stability, and how much the environment changes. A well-mounted camera in a stable indoor environment may hold calibration for months. Cells subject to vibration, temperature swings, or frequent physical disturbance to the camera mount should be checked more regularly. Most vision software platforms include calibration routines that operators can run without specialized tools.
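One practical drift check, sketched here with OpenCV under the assumption that a checkerboard target stays fixed in the cell, is to measure the reprojection error against that target and compare it with the value recorded at commissioning. The board dimensions and square size below are illustrative.

```python
import cv2
import numpy as np

def checkerboard_reprojection_error(gray, camera_matrix, dist_coeffs,
                                    pattern_size=(9, 6), square_size_m=0.025):
    """Mean reprojection error (pixels) against a fixed checkerboard.
    A rising error relative to the commissioning value suggests the
    camera mount has shifted and recalibration is due."""
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        return None
    # Ideal board points in the target's own frame.
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size_m
    ok, rvec, tvec = cv2.solvePnP(objp, corners, camera_matrix, dist_coeffs)
    if not ok:
        return None
    projected, _ = cv2.projectPoints(objp, rvec, tvec, camera_matrix, dist_coeffs)
    errors = np.linalg.norm(projected.reshape(-1, 2) - corners.reshape(-1, 2), axis=1)
    return float(errors.mean())
```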