
BLUE ARGUS
From Prompt to Pick Point
A modular computer vision platform for robotics. Start with what you need, add capabilities as your application demands.
THE PROBLEM
Vision is the most common failure point in process automation.
Integrating a camera, building a computer vision pipeline, and translating image data into robot coordinates typically takes weeks of custom development followed by lengthy model training cycles specific to each part or SKU. Most integrators avoid it entirely. Blue Argus removes that barrier. Everything ships together, pre-configured and ready to integrate, with no model training or fine-tuning required for most applications.
Natural language prompt
Image segmentation
3D point identification
Robot pick point
HOW IT WORKS
Zero training required for most applications
Blue Argus leverages large pre-trained vision models and can recognize parts and objects it has never seen before on day one, with no training pipeline. If your customer has more than a handful of SKUs, or if their parts change frequently, traditional vision systems become unmanageable. For the vast majority of applications, we’ve removed that problem entirely.

Key System Differentiators
Modular Software Platform
Start with the base prompt-to-pick-point capability, then add orientation detection, enhanced depth, or faster inference as your application demands.
Any End Effector
The General Vision Kit works with any end effector the integrator already has. The Suction-Enabled Kit adds a complete pneumatic picking system.
Integrates with Python SDK
The system is ready to integrate with any robot arm that exposes a Python SDK. Python sample code is included with the kit.
Path Planning Compatible
Compatible out of the box with standard path planning frameworks including MoveIt. Integrates into existing robot programming workflows.
THE CORE SDK
Vision SDK: Prompt To Pick Point
The Vision SDK is the system's core computer vision capability. Describe the target object in natural language; the SDK segments the camera image and returns the object's 3D center point in robot coordinate space.
Why does this matter for high-mix environments?
Traditional vision systems require training data and fine-tuning for every new part. Blue Argus leverages large pre-trained vision models, so it recognizes novel parts on day one and needs no retraining when the product mix changes. This is the barrier that drives most integrators away from vision, and it's gone.
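In code, a prompt-to-pick-point call might look like the sketch below. The client class, method name, and returned values are hypothetical stand-ins (the stub returns a fixed point rather than running real segmentation); they illustrate the shape of the call, not the shipped SDK API.

```python
# Hypothetical usage sketch of the prompt-to-pick-point flow. The class and
# method names are illustrative stand-ins, not the shipped Blue Argus SDK.

from dataclasses import dataclass

@dataclass
class PickPoint:
    x: float  # metres, in the robot's base coordinate frame
    y: float
    z: float

class VisionClient:
    """Stand-in client: segment the camera image by prompt, return a 3D point."""

    def locate(self, prompt: str) -> PickPoint:
        # A real call would segment the image and fuse depth data here;
        # this stub just returns a fixed example point.
        return PickPoint(x=0.42, y=-0.11, z=0.05)

client = VisionClient()
target = client.locate("the blue gear on the left side of the tray")
print(target)  # a 3D point ready to hand to the robot's motion controller
```

The same three lines at the bottom are the whole integration surface for the base capability: construct a client, pass a prompt, receive a point.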
Integration in 5 Steps
1
Mount the kit to the robot arm
The wrist mount positions the 3D depth-sensing camera at the end of the arm alongside your end effector. Compatible with standard robot wrist flanges.
2
Connect via Ethernet
Camera is PoE-powered — run the included Cat6 cable to the included PoE switch. No separate power supply or custom wiring required.
3
Run the SDK on the included compute unit
Vision SDK runs locally on the High-Performance Compute Unit. No cloud dependency, no external GPU required.
4
Pass a natural language prompt
Describe the object to pick via the Python API. SDK segments the image and locates the target — no per-SKU training needed.
5
Receive the 3D pick point — ready to execute
SDK returns the 3D center point in robot coordinate space. Pass directly to the robot's motion controller or path planning framework.
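The coordinate math behind steps 4 and 5 can be sketched as a pinhole back-projection followed by a hand-eye transform. This is a minimal illustration assuming known camera intrinsics and a fixed camera-to-base transform captured at image time; the function names, matrix, and values are hypothetical examples, not the Blue Argus API.

```python
# Illustrative sketch of how a segmented pixel plus a depth reading becomes a
# robot-frame pick point. All names, intrinsics, and transforms here are
# hypothetical examples, not the shipped Blue Argus API.

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) at a depth -> camera-frame XYZ (m)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

def apply_transform(T, point):
    """Apply a 4x4 homogeneous transform (row-major nested lists) to a 3D point."""
    x, y, z = point
    return tuple(T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3] for i in range(3))

# Example camera pose at capture time: wrist-mounted camera looking straight
# down, 0.5 m above the robot base plane, offset 0.2 m along base x.
T_base_cam = [
    [1,  0,  0, 0.2],
    [0, -1,  0, 0.0],
    [0,  0, -1, 0.5],
    [0,  0,  0, 1.0],
]

# Segmentation puts the object's center at pixel (320, 240); depth reads 0.4 m.
cam_xyz = deproject(320, 240, 0.4, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
pick_xyz = apply_transform(T_base_cam, cam_xyz)
print(pick_xyz)  # about (0.2, 0.0, 0.1) in the robot base frame
```

The SDK performs this conversion internally, which is why the returned point can be passed straight to the motion controller without integrator-side camera calibration code.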





Suction Gripper
The Suction-Enabled Kit adds a complete pneumatic picking system, ready for pick-and-place and palletizing applications straight out of the box.
The Kit Includes:
- 3D depth camera
- Universal wrist mount
- High-Performance Compute Unit
- PoE switch (4-port, 67W)
- Cat6 Ethernet cabling
- Vacuum end effector + height-compensating spring buffer (50mm throw)
- SCPSi compact ejector with IO-Link (M12-5, 5m cable)
- Ready-to-integrate pneumatic supplies and vacuum hardware
- Compatible with standard path planning frameworks


