This week at Automate 2024, North America’s largest robotics and automation trade show, we’re showcasing the Intrinsic platform and how we can help solution builders apply AI-enabled capabilities to industry solutions. We’re also sharing more about recent technical milestones we’ve hit working together with the visionary teams at NVIDIA and Google DeepMind Robotics.

Introducing our collaboration with NVIDIA

Working with the robotics team at NVIDIA, we have successfully tested NVIDIA robotics platform technologies, including the NVIDIA Isaac Manipulator foundation models, to power a robot grasping skill on the Intrinsic platform. This prototype features an industrial application specified by Trumpf Machine Tools, which is both a partner and a customer of ours. The grasping skill, trained entirely on synthetic data generated by NVIDIA Isaac Sim, can be used to build sophisticated solutions that perform adaptive, versatile object-grasping tasks in simulation and in the real world. Instead of hard-coding how a specific gripper should grasp a specific object, efficient code for a particular gripper and object pairing is auto-generated from the foundation model and its synthetic training data.
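To make that shift concrete, here is a minimal, hypothetical Python sketch of what a reusable grasping skill could look like to a solution builder. Every name here (GraspCandidate, GraspModel, plan_grasp) is an illustrative assumption, not the actual Intrinsic or NVIDIA Isaac Manipulator API:

```python
# Hypothetical sketch only: the application asks a model for ranked grasp
# candidates instead of hard-coding gripper-specific logic. All names and
# signatures are assumptions for illustration, not a real API.
from dataclasses import dataclass

@dataclass
class GraspCandidate:
    pose: tuple          # 6-DoF gripper pose: (x, y, z, roll, pitch, yaw)
    width: float         # required gripper opening, in meters
    score: float         # model confidence in [0, 1]

class GraspModel:
    """Stand-in for a foundation model trained on synthetic Isaac Sim scenes."""
    def infer(self, rgbd, max_width: float) -> list:
        # A real model would run inference on the RGB-D observation here.
        return [GraspCandidate((0.40, 0.0, 0.12, 0.0, 3.14, 0.0), 0.05, 0.91)]

def plan_grasp(model: GraspModel, rgbd, max_width: float):
    """Keep the highest-confidence grasp this particular gripper can execute."""
    feasible = [c for c in model.infer(rgbd, max_width) if c.width <= max_width]
    return max(feasible, key=lambda c: c.score) if feasible else None
```

The point of the pattern is that swapping in a different gripper or part means changing the inputs, not rewriting the grasp logic.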

“With the latest AI foundation models, companies can program a variety of robot configurations that are able to generalize and interact with diverse objects inside real-world environments,” said Deepu Talla, Vice President of Robotics and Edge Computing at NVIDIA. “As the collaboration between Intrinsic and NVIDIA deepens, we will be able to help companies scale and automate their industrial manufacturing operations.”

In the future, developers will be able to use ready-made universal grasping skills like these to greatly accelerate their robot programming. For the broader industry, this development shows how foundation models can have a profound impact, including making today's robot deployment challenges easier to manage at scale, enabling previously infeasible applications, reducing development costs, and increasing flexibility for end users.

Showcasing our work with Google DeepMind Robotics

For several years, we have been working with the robotics team at Google DeepMind, who are at the cutting edge of research across diverse robotic form factors, applications, and verticals. Collaborating with their team and understanding each other's needs has helped validate our decision to fully integrate AI into the Intrinsic platform. Tools like these will help roboticists and developers solve hard, uneconomical, or seemingly intractable robotics and automation challenges in new ways. Together with Google DeepMind, we've demonstrated some novel, high-value methods for robot programming and orchestration, many of which have practical applications today.

  • Multi-robot motion planning with machine learning: Today, developers often spend anywhere from a few hours on simple applications to hundreds or even thousands of hours on complex multi-robot applications. With Intrinsic's and Google DeepMind's work on a universal, AI-based motion planner for one or more robots sharing the same workspace, this programming can be done automatically. Instead of traditional motion-planning algorithms, we use a model trained on synthetic data from a physics engine. Its inputs are models of the workcell geometry, the robots' kinematics and dynamics, and a description of the task. Trained in the cloud, the resulting model produces near-optimal motion paths and trajectories, usually outperforming solutions from human experts.

Our teams have tested this fully ML-generated solution by orchestrating four separate robots in a simulated, scaled-down car-welding application. The motion plans and trajectories for each robot are auto-generated, collision-free, and surprisingly efficient, performing about 25% better than some traditional methods we've tested (a minimal sketch of this workflow follows this list).

  • Learning from demonstration, applied to two-handed dexterous manipulation: To create scalable and generalizable solutions, many assembly applications require robots to manipulate objects with two arms and grippers. The ability to gently pick up, move, precisely place, and assemble objects with human-like dexterity is both highly challenging and highly valuable for robotics at large. From automotive body-in-white to electronics assembly and consumer services, companies across many industries are eager for the new applications that easier programming of advanced, multi-robot dexterity could make possible. Below, you'll see how one of Google DeepMind's methods of training a model from human input, captured via remote teleoperation devices, benefits from Intrinsic's real-world data, sensor-data management, and high-frequency real-time control infrastructure. This example shows how human-instructed skills can be trained efficiently and used effectively to solve haptically challenging assembly tasks, similar to those in the Manufacturing Track of the NIST Robotic Grasping and Manipulation Competition (a generic sketch of the training pattern also follows this list).
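As promised above, here is a minimal sketch of the multi-robot planning workflow as described: models of the workcell and a task description go in, and coordinated per-robot trajectories come out. The PlannerModel class and its predict() method are assumptions for illustration, not Intrinsic's or Google DeepMind's actual interface:

```python
# Hedged illustration of the described workflow: a planner model, trained in
# the cloud on physics-engine data, maps workcell models plus a task
# description to coordinated trajectories for every robot in the cell.
from dataclasses import dataclass, field

@dataclass
class Workcell:
    geometry: dict      # collision meshes / primitives for the cell
    kinematics: dict    # per-robot kinematic chains (e.g. derived from URDF)
    dynamics: dict      # joint, velocity, and acceleration limits

@dataclass
class Task:
    # robot name -> sequence of Cartesian poses that robot must visit
    waypoints: dict = field(default_factory=dict)

class PlannerModel:
    """Stand-in for the learned planner; a real one runs neural inference."""
    def predict(self, cell: Workcell, task: Task) -> dict:
        # Placeholder: one (empty) joint trajectory per robot in the task.
        return {name: [] for name in task.waypoints}

def plan_workcell(model: PlannerModel, cell: Workcell, task: Task) -> dict:
    """One inference call replaces hours of per-robot path programming."""
    trajectories = model.predict(cell, task)
    assert set(trajectories) == set(task.waypoints), "one trajectory per robot"
    return trajectories
```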
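And here is the second sketch: a generic behavior-cloning loop of the kind commonly used for learning from demonstration. It logs synchronized observation-action pairs while a human teleoperates the arms, then fits a policy to imitate them. The nearest-neighbor "policy" is a deliberately tiny stand-in for the neural network a real system would train; none of this is Google DeepMind's actual method:

```python
# Generic learning-from-demonstration sketch (not the actual method): record
# (observation, action) pairs from teleoperation, then fit an imitation policy.
import numpy as np

def collect_demonstrations(teleop_stream, n_steps: int):
    """Log synchronized (observation, action) pairs from a teleop session,
    e.g. camera/force features paired with commanded joint targets."""
    obs, acts = [], []
    for _ in range(n_steps):
        o, a = next(teleop_stream)
        obs.append(o)
        acts.append(a)
    return np.asarray(obs), np.asarray(acts)

class NearestNeighborPolicy:
    """Toy imitation policy: act as the closest demonstrated state did.
    A real system would train a neural network on the same data."""
    def fit(self, obs: np.ndarray, acts: np.ndarray):
        self.obs, self.acts = obs, acts
        return self

    def __call__(self, o: np.ndarray) -> np.ndarray:
        i = int(np.argmin(np.linalg.norm(self.obs - o, axis=1)))
        return self.acts[i]
```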

Sharing a glimpse into Intrinsic’s internal AI efforts

In addition to these exciting partner ecosystem milestones, Intrinsic's R&D and perception teams have been working on next-generation perception and deep learning capabilities, including an in-house foundation model for object detection and pose estimation. Similar in potential to the AI-enabled universal grasping and motion-planning capabilities above, universal object detection and pose estimation fill another big piece of the intelligent-automation puzzle. With that in mind, today we're sharing an early, promising look at our latest perception model:

  • Foundation model for perception: Enabling a robotic system to understand its next task and the physical objects involved requires a real-time, accurate, semantic understanding of the environment. AI-enabled vision systems are what make such intelligent robotic systems possible. We have developed a foundation model, pre-trained on 130,000+ objects, that can perform one-shot object detection and pose estimation for a wide variety of objects in a few seconds. This, too, is a universal skill: it works well across many different objects and can contend with environmental variables, including different cameras, lighting conditions, and object orientations. The model is fast, generalizable, and accurate. We are working to bring it, and others like it, onto the Intrinsic platform as new capabilities, so that they become easy to develop, deploy, and operate.
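As an illustration of what "one-shot" means in practice, here is a hedged sketch of how such a perception skill might be exposed to developers: register an object from a single example, then query its pose in new scenes. The PerceptionModel API below is an assumption for this example, not Intrinsic's actual interface:

```python
# Hypothetical one-shot perception interface: enroll an object from a single
# example, then estimate its 6-DoF pose in new scenes. Illustrative only.
import numpy as np

class PerceptionModel:
    """Stand-in for a perception foundation model pre-trained on 130,000+
    objects; enrollment and inference here are placeholders."""
    def __init__(self):
        self.templates = {}

    def register(self, name: str, example_rgbd: np.ndarray) -> None:
        # One-shot enrollment: a single RGB-D example adds a new object.
        self.templates[name] = example_rgbd

    def detect(self, scene_rgbd: np.ndarray, name: str):
        """Return (4x4 pose matrix, confidence) for the named object. A real
        model would be robust to camera, lighting, and orientation changes."""
        if name not in self.templates:
            raise KeyError(f"register() '{name}' before detecting it")
        return np.eye(4), 0.0  # placeholder pose and score
```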

On behalf of our leadership team (Chief Technology Officer Brian Gerkey, Chief Science Officer Torsten Kroeger, and myself) and all of us at Intrinsic, I thank our colleagues at Google DeepMind Robotics, NVIDIA, and our other industry partners for their close collaboration in bringing AI and its value into the physical world. Together we're surfacing eminently useful, practical, and valuable use cases that are in high demand from our customers and partners. Robotics is AI in the physical world, and we're excited for what's next!

If you’ll also be at Automate, stop by our booth (#2808), demos, and speaking events to learn more. On Mon., May 6th at 1:45 p.m., Intrinsic CTO Brian Gerkey will be joining a panel on the latest advances in automation. On Thurs., May 9th at 9:00 a.m., Intrinsic CEO Wendy Tan White will deliver a keynote session about what the rise of AI means for innovation and growth.
