AgiBot Unveils Universal Embodied Foundation Model—Genie Operator-1 (GO-1)
Shanghai, China, March 11, 2025 — On March 10, AgiBot officially launched its universal embodied foundation model, Genie Operator-1 (GO-1). This groundbreaking foundation model introduces the Vision-Language-Latent-Action (ViLLA) framework, composed of a Vision-Language Model (VLM) and a Mixture-of-Experts (MoE) system. It boasts key advantages such as learning from human videos, few-shot generalization, cross-embodiment adaptation, and continuous self-evolution.
For the broader robotics and AI ecosystem, GO-1 marks a paradigm shift—moving beyond rigid, task-specific models to a truly flexible, learning-driven AI that can be deployed across multiple domains. It sets a new standard for universal intelligence in robotics, accelerating the transition from narrowly specialized machines to general-purpose AI-powered robots, capable of reshaping workflows, boosting productivity, and unlocking new frontiers in automation.
The VLM uses vast amounts of internet-based text-image data to develop general scene perception and language understanding capabilities. The latent planner within the MoE leverages large datasets of cross-embodiment and human video data to gain general motion understanding, while the action expert model within the MoE utilizes millions of real-world data points to achieve fine-tuned action execution. Together, these components work in harmony, pushing embodied intelligence to new heights.
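The pipeline described above (VLM perception, a latent planner inside the MoE, and an action expert that emits low-level commands) can be sketched as a toy forward pass. Every name, dimension, and design choice below, including the random weights and the hard top-1 expert routing, is a hypothetical illustration of the data flow only, not AgiBot's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

class VLMEncoder:
    """Stand-in for the vision-language backbone: maps an image and an
    instruction to a shared token embedding (weights are random here)."""
    def __init__(self, dim=64):
        self.img_proj = rng.standard_normal((3 * 32 * 32, dim)) * 0.02
        self.txt_proj = rng.standard_normal((128, dim)) * 0.02

    def encode(self, image, text_ids):
        img_tok = image.reshape(-1) @ self.img_proj             # one image token
        txt_tok = np.eye(128)[text_ids].sum(0) @ self.txt_proj  # bag-of-words text token
        return np.stack([img_tok, txt_tok])                     # (2, dim)

class LatentPlanner:
    """MoE-style planner: routes the scene embedding to one of K experts,
    each predicting a short horizon of latent action tokens."""
    def __init__(self, dim=64, n_experts=4, horizon=8):
        self.gate = rng.standard_normal((dim, n_experts)) * 0.02
        self.experts = rng.standard_normal((n_experts, dim, horizon * dim)) * 0.02
        self.horizon, self.dim = horizon, dim

    def plan(self, tokens):
        ctx = tokens.mean(axis=0)
        k = int(np.argmax(ctx @ self.gate))   # hard top-1 expert routing
        return (ctx @ self.experts[k]).reshape(self.horizon, self.dim)

class ActionExpert:
    """Decodes latent action tokens into low-level joint commands."""
    def __init__(self, dim=64, action_dim=7):
        self.head = rng.standard_normal((dim, action_dim)) * 0.02

    def decode(self, latents):
        return latents @ self.head            # (horizon, action_dim)

vlm, planner, expert = VLMEncoder(), LatentPlanner(), ActionExpert()
image = rng.standard_normal((3, 32, 32))      # toy RGB observation
instruction = [5, 17, 42]                     # toy instruction token ids
actions = expert.decode(planner.plan(vlm.encode(image, instruction)))
print(actions.shape)                          # prints (8, 7)
```

With the toy sizes used here, one image and one instruction map to an 8-step trajectory of 7-DoF joint commands; in a real system each stand-in module would be a trained network, and the latent planner would be trained on the cross-embodiment and human video data described above.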
With AgiBot GO-1, robots can take over real-world tasks—whether it’s assisting with household chores, automating office workflows, streamlining industrial operations, or enhancing customer service. This means businesses and individuals can boost efficiency, cut costs, and unlock new possibilities in automation. It excels in generalization across different environments and objects, adapting to new tasks, learning new skills, and evolving its capabilities.
In addition to its robust generalization ability, the robot can quickly adapt to new tasks with only minimal data, significantly reducing retraining costs. Data collected from different types of embodiments, including human operation videos, can be used efficiently by the model, with each data source reinforcing the others and reducing redundancy in data collection.
Based on GO-1, robots can be rapidly deployed and operated in real-world scenarios. Its robustness in complex environments, cost-effective cross-scenario and cross-task transfer, and self-evolving capability driven by a data flywheel enable it to complete a wide variety of tasks in dynamic settings, giving robots a meaningful role across modern society’s diverse scenarios. These characteristics can be summarized in four main aspects:
• Learning from Human Videos: GO-1 can learn from internet videos and real human demonstrations to enhance its understanding of human actions.
• Few-Shot Generalization: GO-1’s strong generalization ability enables fast adaptation to new scenes and tasks with minimal data, even in zero-shot scenarios, resulting in very low post-training costs.
• Cross-Embodiment Adaptation: GO-1 is a generalist robot policy model, capable of transferring between different kinds of robots and quickly adapting to various embodiments.
• Continuous Self-Evolution: GO-1 can continuously evolve from data generated by issues encountered during real-world execution, within AgiBot’s complete data feedback system.
AgiBot GO-1 Breaks Through Embodied Model Application Bottlenecks and Unlocks New Possibilities
In recent years, embodied AI has made rapid and significant progress, but current embodied models still face several challenges that make their practical application difficult.
These challenges are evident in:
· Narrow Skillsets: Most models are trained for specific skills and cannot quickly learn new skills. Their ability to generalize to new scenes or objects is limited, and they often perform poorly when faced with unfamiliar environments or objects.
· Limited Language Understanding: Many existing models are small-scale and lack language comprehension, preventing them from generalizing commands effectively.
· Single-Robot Deployment: Most models are designed for a specific robot entity, making it difficult to utilize cross-entity data efficiently and deploy across various robot types.
The launch of GO-1 overcomes these challenges, providing powerful cognitive support for robots to perform tasks across various domains of work and life. From household tasks like preparing meals and clearing tables, to common office and business tasks like welcoming guests and distributing items, to industrial operations and more, the universal embodied foundation model enables robots to handle a wide range of tasks efficiently.
The release of AgiBot’s universal embodied foundation model marks a rapid move toward generalization, openness, and intelligence in embodied AI:
· From Single Tasks to Multiple Tasks: Robots can now execute a variety of tasks in different environments without needing to retrain for each new task.
· From Closed Environments to the Open World: Robots are no longer limited to labs and can adapt to the dynamic real-world environment.
· From Preset Programs to Instruction Generalization: Robots now understand natural language commands and can perform reasoning based on semantics, moving beyond preset programs.
AgiBot’s universal embodied foundation model will accelerate the spread of embodied intelligence. Robots will evolve from task-specific tools into autonomous entities with general intelligence, playing a greater role in industries like manufacturing, services, healthcare, logistics, and home applications. This universal foundation model will also propel robots to new levels of intelligence, reshaping the robotics ecosystem and leading to a future of more general, all-purpose intelligent systems.
Explore how GO-1 can transform industries and revolutionize robotics. Visit AgiBot’s website to learn more.
https://www.facebook.com/profile.php?id=61571059866465
https://www.linkedin.com/company/agibot/
https://www.youtube.com/channel/UCuKcqTxz_fe1PbrsIAQXr5A
https://www.tiktok.com/@agibot_
Contact Info:
Name: William Peng
Organization: AgiBot
Website: https://www.agibot.com/
Release ID: 89154893
In case of identifying any problems, concerns, or inaccuracies in the content shared in this press release, or if a press release needs to be taken down, we urge you to notify us immediately by contacting error@releasecontact.com (it is important to note that this email is the authorized channel for such matters, sending multiple emails to multiple addresses does not necessarily help expedite your request). Our dedicated team will be readily accessible to address your concerns and take swift action within 8 hours to rectify any issues identified or assist with the removal process. We are committed to delivering high-quality content and ensuring accuracy for our valued readers.