
RT-2 is a vision-language model which enhances robots' abilities to recognise visual and linguistic patterns.

Google has begun using transformer-based Artificial Intelligence (AI), built on large language models (LLMs), to train robots that can operate without complicated instructions. The technology is akin to the one underpinning Google's Bard chatbot, a competitor to OpenAI's ChatGPT. The tech giant's latest development, Robotic Transformer 2 (RT-2), is a new kind of AI learning model that translates what a robot sees and is told into actions.
RT-2 is a vision-language model that enhances robots' abilities to recognise visual and linguistic patterns. This helps robots better interpret instructions and select suitable objects based on a request. In an office-kitchen environment, Google researchers trialled RT-2 with a robotic arm, instructing it to identify a good improvised hammer and to select a drink for an exhausted person.
Interestingly, the robotic arm, dubbed 'SWIFTIE', was also directed to move a Coke can to a picture of pop star Taylor Swift. The new model is trained on a combination of web data, robotics data, and research advances in LLMs such as Bard. Beyond English, it can understand instructions in multiple other languages.
Equipping robots with stronger inference abilities for real-world environments has been a long-standing goal for researchers. In practice, robots need detailed instructions to perform tasks that humans find instinctive, such as picking up a glass. Earlier approaches involved lengthy training periods in which researchers had to program each behaviour individually. With vision-language models, however, robots can draw on a far broader pool of information to infer the next action, as the sketch below illustrates.
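For readers curious about the mechanics, here is a minimal, hypothetical sketch of how a vision-language-action model can turn a camera image and a plain-language instruction into a motion command. The names and numbers used (ToyVLAModel, NUM_BINS, a 7-value action) are illustrative assumptions for this article, not Google's actual RT-2 code.

```python
# Illustrative sketch only: a toy stand-in for how a vision-language-action
# model might map (image, instruction) -> discretised action tokens.
# All names here (ToyVLAModel, NUM_BINS, the 7-value action layout) are
# hypothetical and not taken from Google's RT-2 code.

import random

NUM_BINS = 256      # assumed: each action dimension discretised into 256 bins
ACTION_DIMS = 7     # assumed: e.g. arm pose deltas plus a gripper open/close value


class ToyVLAModel:
    """Placeholder standing in for a large vision-language-action model."""

    def predict_action_tokens(self, image, instruction):
        # A real model would attend over image patches and instruction tokens;
        # here we just return deterministic pseudo-random token IDs to show
        # the shape of the interface.
        random.seed(hash(instruction) % (2 ** 32))
        return [random.randrange(NUM_BINS) for _ in range(ACTION_DIMS)]


def detokenise(tokens):
    """Map integer token bins back to continuous action values in [-1, 1]."""
    return [2.0 * t / (NUM_BINS - 1) - 1.0 for t in tokens]


if __name__ == "__main__":
    model = ToyVLAModel()
    image = [[0] * 224 for _ in range(224)]   # dummy camera frame
    instruction = "pick the drink best suited for someone who is tired"
    tokens = model.predict_action_tokens(image, instruction)
    action = detokenise(tokens)
    print("action tokens:", tokens)
    print("continuous action command:", [round(a, 3) for a in action])
```

The underlying idea, as the RT-2 team describes it, is that actions are emitted as sequences of tokens, much as a chatbot emits words, so a single transformer can handle both language and motion.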
Google's exploration in this field began last year when it brought an LLM into robotics with the PaLM-SayCan system, which integrated an LLM with a physical robot. While Google's new robot isn't flawless, it marks another instance of AI becoming increasingly integrated into everyday life.
This development follows Tesla's recent showcase of its Tesla Bot. The company envisages the bot as a simple, daily-use robot that relies on computer vision technology similar to that used in its Autopilot Advanced Driver Assistance System (ADAS).
