Google wants robots to generate their own code
There are countless big problems left to solve in the world of automation, and robotic learning sits somewhere near the top. While it’s true that humans have gotten pretty good at programming systems for specific tasks, there’s a big, open-ended question: and then what?
New research demonstrated at Google’s AI event in New York City this morning proposes the notion of letting robotic systems effectively write their own code. The concept is designed to save human developers the hassle of having to go in and reprogram things as new information arises.
The company notes that existing research and trained models can be effective in implementing the concept. All of that work can prove foundational for systems that continue to generate their own code based on the objects and scenarios they encounter in the real world. The new work on display today is called Code as Policies (CaP).
The system, as described, also relies on third-party libraries and APIs to generate the code best suited to a specific scenario, along with support for different languages and (why not?) emojis. The information accessible through those APIs is one of the current limitations. The researchers note, “These limitations point to avenues for future work, including extending visual language models to describe low-level robot behaviors (e.g., trajectories) or combining CaPs with exploration algorithms that can autonomously add to the set of control primitives.”
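To make the idea concrete, here is a minimal sketch of what the approach looks like in practice. Everything in it is hypothetical: the primitive names (detect_objects, pick_and_place), the prompt format, and the canned model response are illustrative stand-ins, not the actual CaP API or Google’s released code.

```python
# Hypothetical sketch of the "code as policies" idea: a language model is
# prompted with the robot's available perception/control primitives plus a
# natural-language instruction, and it emits Python policy code that calls
# those primitives. All names below are illustrative, not the real CaP API.

PROMPT = '''
# Available primitives (assumed for this sketch):
#   detect_objects() -> list[str]   # names of objects the camera sees
#   pick_and_place(obj, target)     # move obj onto/into target
# Instruction: put the red block in the blue bowl.
'''

def generate_policy(prompt: str) -> str:
    """Ask a code-writing language model for policy code (model call stubbed out)."""
    # In practice this would call a code-generation model; here we return a
    # canned response so the sketch stays self-contained and runnable.
    return (
        "objects = detect_objects()\n"
        "if 'red block' in objects and 'blue bowl' in objects:\n"
        "    pick_and_place('red block', 'blue bowl')\n"
    )

# Stub primitives so the generated snippet can actually execute in this sketch.
def detect_objects():
    return ["red block", "blue bowl", "green block"]

def pick_and_place(obj, target):
    print(f"moving {obj} -> {target}")

if __name__ == "__main__":
    policy_code = generate_policy(PROMPT)
    # The generated string is itself the robot's policy: running it drives
    # the (stubbed) control primitives.
    exec(policy_code, {"detect_objects": detect_objects,
                       "pick_and_place": pick_and_place})
```

The design point the sketch is meant to highlight is that the model’s output is executable code rather than a fixed action label, so new behaviors can be composed from the same primitives without a human going back in to reprogram the robot.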
As part of today’s announcement, Google is releasing open source versions of the code through its GitHub site so others can build on the research it has presented thus far. So, you know, all of the usual caveats about early-stage research apply here.