For robots to be truly helpful, they need to understand the physical world like we do. That’s why today we're introducing Gemini Robotics-ER 1.6, an upgrade to our reasoning-first model that enables robots to understand their environments with unprecedented precision.
By enhancing spatial reasoning and multi-view understanding, we're bringing a new level of autonomy to the next generation of physical agents. This model specializes in capabilities critical for robotics, including visual and spatial understanding, task planning, and success detection.
The model also introduces instrument reading, enabling robots to interpret complex gauges and sight glasses, a capability identified through collaboration with Boston Dynamics. Gemini Robotics-ER 1.6 is our safest robotics model to date, demonstrating superior compliance with safety policies on adversarial spatial reasoning tasks.
Starting today, Gemini Robotics-ER 1.6 is available to developers via the Gemini API and Google AI Studio. Read more on the Google DeepMind blog.
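To give a sense of how spatial-understanding output from the model can be used, here is an illustrative sketch, not official sample code. The point-response format is an assumption based on earlier Gemini Robotics-ER releases, where the model can return 2D points as JSON like `[{"point": [y, x], "label": "..."}]`, with coordinates normalized to a 0-1000 range; the helper below converts those into pixel coordinates for a camera frame.

```python
# Illustrative sketch only. The [{"point": [y, x], "label": ...}] format
# with 0-1000 normalized coordinates is an assumption based on prior
# Gemini Robotics-ER documentation, not a guarantee about this release.
import json


def scale_points(response_text: str, width: int, height: int):
    """Convert normalized [y, x] points (0-1000 range) to pixel (x, y) tuples."""
    points = json.loads(response_text)
    return [
        {
            "label": p["label"],
            "xy": (
                int(p["point"][1] / 1000 * width),   # x from second element
                int(p["point"][0] / 1000 * height),  # y from first element
            ),
        }
        for p in points
    ]


# Example: a hypothetical model response for a 640x480 camera frame.
raw = '[{"point": [500, 250], "label": "mug handle"}]'
print(scale_points(raw, 640, 480))
```

In practice, `response_text` would come from a Gemini API call (for example via the Google GenAI SDK's `generate_content`) with an image attached and a prompt asking the model to point at objects.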