As Intrinsic works to democratize access to robotics and bring the value of AI into the physical world, we’re encouraged by manufacturers investing heavily in AI to realize never-before-possible value. As the open source robotics conference ROSCon kicks off in Singapore, we’re announcing the first “AI for Industry Challenge” for developers, organized in partnership with Open Robotics. For the first time, developers will be invited to use the best of open source tools and simulators to train an AI model, develop their solution with Intrinsic Flowstate, and test it on a real-world workcell at Intrinsic HQ – bridging the gap between simulated and real-world conditions. We see a near future where developers, including those from the ROS community, can build industry-grade robotics solutions with ease – leveraging new, old and open tools in innovative ways.

We’re excited to share that this challenge will give participants access to the Intrinsic Vision Model (IVM), our most powerful model for industrial perception. A breakthrough in AI-powered perception, IVM recently set record-breaking benchmarks for speed and precision, validated by the Benchmark for 6D Object Pose Estimation (BOP) at the leading AI-vision conference, the International Conference on Computer Vision (ICCV 2025). Together, these developments will help enable a next-generation developer ecosystem for intelligent robotics.

Announcing a new type of robotics challenge

At Intrinsic, we’ve been accelerating AI for robotics, so more businesses and manufacturers can make products and services that people value. A critical pillar of that is ensuring untapped developer talent around the world has the opportunity to bridge the digital and physical worlds in useful and beneficial ways. The AI for Industry Challenge is open for all to apply, and represents a new type of opportunity for developers and roboticists to use the tools they know and love along with Intrinsic’s platform and AI to create never-before-possible solutions. AI-enabled robotics is needed by millions of businesses that are dealing with labor shortages and rely on human labor for repetitive, difficult tasks that could be automated.

In 2026, the AI for Industry Challenge theme is electronics assembly – an industry experiencing massive growth but also some of the most persistent robotics issues in manufacturing. These include the ability to see, grip and insert cable connectors accurately, with just the right amount of force. Everything from server trays to solar panels requires a tremendous amount of manual cable handling and insertion – time-intensive, repetitive tasks for people to perform. The challenge, which has a $180,000 prize pool, will focus on solving a similar problem that’s representative of real-world factory use cases, including tasks that increase in complexity from cable insertion to cable management.

Participants will develop and train an AI model for dexterous manipulation of electrical components as part of an assembly process, using a mix of open source tools including Gazebo or other powerful simulation engines like Google DeepMind’s MuJoCo and NVIDIA’s Isaac Sim. The top 30 model submissions, evaluated through Gazebo, will unlock access to Intrinsic’s developer environment, Flowstate, and the Intrinsic Vision Model – a state-of-the-art foundation model for perception tasks.

While this year’s challenge focuses on electronics assembly, this type of unsolved problem is common to dozens of industries, from aerospace to healthcare – all of which will increasingly look to talented developers to deliver autonomous robotic solutions that work. You can sign up for the challenge here; it officially kicks off on February 11, 2026, with registration closing April 17, 2026.

Breakthrough in functional intelligence: Intrinsic Vision Model (IVM) wins seven of eleven benchmarks at ICCV 2025

For decades, robots’ ability to make sense of the world around them — known as perception — has been hindered by the complexity of everyday production environments. Variables like inconsistent lighting and complex materials, including reflective or transparent surfaces, are particularly difficult for AI to handle. As a result, most of today’s automation is ‘blind’ and fails to adapt to changing parts or their surroundings. In a milestone for visual intelligence, we’re introducing the Intrinsic Vision Model (IVM), a state-of-the-art, industrial-grade foundation model that leverages an expanding set of specialized transformers for everything from pose detection to tracking, segmentation and point cloud generation. At ICCV 2025, IVM placed first in seven of the eleven BOP benchmark challenges, spanning industrial objects as well as more general household items. IVM was recognized for removing the need for application-specific training, outperforming methods that rely on such training, and delivering sub-millimeter accuracy with cheaper RGB cameras.
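For readers new to the terminology: a “6D pose” (as in the BOP benchmark) is an object’s 3D rotation plus 3D translation relative to a camera. The sketch below is a minimal, illustrative refresher on that convention – it is not IVM code, and all numeric values are made up for the example:

```python
import math

def rotz(theta):
    """3x3 rotation matrix about the z-axis (one of the 3 rotational degrees of freedom)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply_pose(R, t, p):
    """Map a point p from the object (CAD) frame into the camera frame: R @ p + t."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

# Illustrative 6D pose: 90-degree rotation about z, translated 0.5 m along x.
R = rotz(math.pi / 2)
t = [0.5, 0.0, 0.0]

# A point on the CAD model (meters, object frame) lands here in the camera frame:
corner = [0.1, 0.0, 0.0]
print(apply_pose(R, t, corner))  # ~[0.5, 0.1, 0.0]
```

Estimating this rotation and translation for each part from camera images is the task BOP evaluates.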

  • Hardened for real-world industrial use: Differentiated as CAD-native, IVM is setting a new standard for industrial perception tasks. Like other Intrinsic capabilities – combinable blocks of robotic behavior for motion and grasping – this model can be used with third-party AI models to build solutions. IVM works robustly across different lighting conditions, objects, and materials.

  • A new standard in sub-millimeter accuracy and zero training time: IVM includes two significant improvements compared with other models. First, by teaching IVM multi-view, pixel-perfect reasoning, we achieved sub-mm accuracy without expensive depth cameras. Second, by training the model to reason about CAD models, we also achieved zero training time, enabling a whole new generation of manufacturing automation that can adapt to new parts on-the-fly. Because of its high levels of accuracy, our model makes tight-fit insertions possible without the use of fixtures, helping reimagine how things are made. And because better perception allows for better motion, grasping and insertion, this creates new possibilities for high-mix, low-volume applications, where part types are constantly changing.

  • State-of-the-art results with lower hardware costs: IVM achieved its sub-mm accuracy using standard RGB cameras (~$500-$1,000), reducing hardware costs by 5X to 20X compared with industry-standard specialized depth-sensing systems. The improved ability to leverage cheaper hardware and deliver higher performance – thanks to AI – further shifts the economics of building, integrating and managing robotic systems. Even with RGB cameras, IVM remains highly robust under challenging lighting and with difficult material properties such as reflective or transparent surfaces, enabling reliable performance where systems previously failed and solving long-standing manufacturing challenges like high-speed, tight-fit insertions.
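IVM’s internals aren’t public, but the underlying geometry of why multiple RGB views can recover metric depth without a depth sensor is classical stereo triangulation. The sketch below uses an idealized rectified camera pair with made-up parameters (focal length, baseline, and pixel coordinates are all illustrative assumptions, not IVM values):

```python
# Rectified stereo pair: two identical cameras separated by baseline b along x.
# A point at (X, Z) projects at u_left = f*X/Z and u_right = f*(X - b)/Z, so the
# disparity d = u_left - u_right = f*b/Z, giving metric depth Z = f*b/d.

f = 1000.0   # focal length in pixels (assumed)
b = 0.10     # camera baseline in meters (assumed)

def depth_from_disparity(d):
    """Recover metric depth from pixel disparity between the two views."""
    return f * b / d

def triangulate(u_left, u_right):
    """Recover (X, Z) of a point from its pixel coordinate in each view."""
    Z = depth_from_disparity(u_left - u_right)
    X = u_left * Z / f
    return X, Z

# A point at X = 0.05 m, Z = 0.50 m projects at u_left = 100 px, u_right = -100 px:
X, Z = triangulate(100.0, -100.0)
print(X, Z)  # 0.05 0.5

# Pixel-precise matching matters: a 0.1 px disparity error at this distance
# shifts the recovered depth by only ~0.25 mm -- sub-mm territory.
print(abs(depth_from_disparity(200.1) - 0.5))
```

This is why “multi-view, pixel-perfect reasoning” is the key phrase in the accuracy claim above: with sub-pixel correspondence between views, commodity RGB cameras can reach depth precision that would otherwise require specialized depth sensors.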

As we continue to build the Intrinsic platform with an aim of bringing the most valuable benefits of AI to industrial-grade production, we’re supporting the best models and algorithms to make it easier, faster, and more affordable for businesses and developers of all kinds to benefit from advances in industrial robotics. We’re excited to see the incredible advances our partners and the global developer community make with AI.