Building the Next Wave of Autonomous Facilities

Industrial, physical AI-based systems are being accelerated across training, simulation and inference.

Robotic facilities are the culmination of all of these technologies.

Manufacturers like Foxconn or logistics companies like Amazon Robotics can orchestrate teams of autonomous robots to work alongside human workers and monitor factory operations through hundreds or thousands of sensors.

These autonomous warehouses, plants and factories will have digital twins. The digital twins are used for layout planning and optimization, operations simulation and, most importantly, robot fleet software-in-the-loop testing.

Built on Omniverse, “Mega” is a blueprint for factory digital twins that enables industrial enterprises to test and optimize their robot fleets in simulation before deploying them to physical factories. This helps ensure seamless integration, optimal performance and minimal disruption.

Mega lets developers populate their factory digital twins with virtual robots and their AI models, or the brains of the robots. Robots in the digital twin execute tasks by perceiving their environment, reasoning, planning their next motion and, finally, completing planned actions.

These actions are simulated in the digital environment by the world simulator in Omniverse, and the results are perceived by the robot brains through Omniverse sensor simulation.

With sensor simulations, the robot brains decide the next action, and the loop continues, all while Mega meticulously tracks the state and position of every element within the factory digital twin.
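To make this loop concrete, here is a minimal Python sketch of the perceive-reason-plan-act cycle running against a simulated world. All class and function names (RobotBrain, FactoryTwin, software_in_the_loop) are invented for illustration and are not Mega or Omniverse APIs.

```python
# Conceptual sketch of the software-in-the-loop cycle described above.
# Names and logic are illustrative placeholders, not NVIDIA APIs.

class RobotBrain:
    """Stand-in for a robot's AI model: perceives, reasons and plans."""
    def perceive(self, sensor_data):
        return {"obstacles": sensor_data.get("obstacles", [])}

    def plan(self, world_state, goal):
        # Trivial planner: move toward the goal unless an obstacle is seen.
        return "wait" if world_state["obstacles"] else f"move_toward:{goal}"


class FactoryTwin:
    """Stand-in for the digital twin: simulates actions and sensors,
    and tracks the state and position of every element."""
    def __init__(self):
        self.state = {"robot_pose": (0, 0), "obstacles": []}

    def simulate_action(self, action):
        if action.startswith("move_toward"):
            x, y = self.state["robot_pose"]
            self.state["robot_pose"] = (x + 1, y)

    def simulate_sensors(self):
        return {"obstacles": self.state["obstacles"]}


def software_in_the_loop(brain, twin, goal, steps=10):
    for _ in range(steps):
        sensor_data = twin.simulate_sensors()      # sensor simulation
        world_state = brain.perceive(sensor_data)  # perception
        action = brain.plan(world_state, goal)     # reasoning and planning
        twin.simulate_action(action)               # action simulated in the world
    return twin.state


print(software_in_the_loop(RobotBrain(), FactoryTwin(), goal="dock_A"))
```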

This advanced software-in-the-loop testing methodology enables industrial enterprises to simulate and validate changes within the safe confines of the Omniverse digital twin, helping them anticipate and mitigate potential issues to reduce risk and costs during real-world deployment.

Advancements in accelerated computing and physics-based simulation have led us to the next frontier of AI: physical AI.

Physical AI models can perceive, understand, interact with and navigate the physical world with generative AI. This new frontier of AI manifests itself in the embodiment of physical systems that go beyond a traditional AMR, robot arm or humanoid robot, and instead include everything from streetlights to data centers, healthcare facilities and manufacturing plants. With physical AI, these static systems will become dynamic, responsive systems.

To enable developers to build physical AI, NVIDIA has built three computers: NVIDIA AI and DGX to train foundation models, NVIDIA Omniverse to simulate and enhance AI in a physically based virtual environment, and NVIDIA Jetson AGX, a robot supercomputer for onboard inference.
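A rough sketch of how such a three-stage workflow fits together appears below. The function names (train_foundation_model, validate_in_simulation, deploy_to_robot) are placeholders for illustration, not NVIDIA APIs.

```python
# Illustrative three-stage workflow mirroring the three computers described above.
# Every function here is a toy placeholder.

def train_foundation_model(dataset):
    """Stage 1: training on DGX-class systems (stand-in)."""
    return {"weights": f"trained_on_{len(dataset)}_examples"}

def validate_in_simulation(model):
    """Stage 2: software-in-the-loop validation in a digital twin (stand-in)."""
    return model["weights"] is not None

def deploy_to_robot(model):
    """Stage 3: onboard, real-time inference on a Jetson-class computer (stand-in)."""
    print(f"Deploying {model['weights']} for real-time inference")

model = train_foundation_model(dataset=list(range(1000)))
if validate_in_simulation(model):
    deploy_to_robot(model)
```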

The era of physical AI is here, transforming the world’s heavy industries and robotics. Join the world’s leading companies and get started with NVIDIA Robotics now.



The Three Computer Solution: Powering the Next Wave of AI Robotics

Industrial, physical AI-based systems — from humanoids to factories — are being accelerated across training, simulation and inference.

Like chatbots and image generators, this robotics technology learns its skills by analysing enormous amounts of digital data.


ChatGPT marked the big bang moment of generative AI. It can generate answers to nearly any query, helping transform digital work such as content creation, customer service, software development and business operations for knowledge workers.

Physical AI, the embodiment of artificial intelligence in humanoids, factories and other devices within industrial systems, has yet to experience its breakthrough moment.

This has held back industries such as transportation and mobility, manufacturing, logistics and robotics. But that’s about to change thanks to three computers bringing together advanced training, simulation and inference.


Today, software writes software. The world’s computing workloads are shifting from general-purpose computing on CPUs to accelerated computing on GPUs, leaving Moore’s law far behind.

With generative AI, multimodal transformer and diffusion models have been trained to generate responses.

Large language models are one-dimensional, able to predict the next token, in modes like letters or words. Image- and video-generation models are two-dimensional, able to predict the next pixel.
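To make the dimensionality point concrete, the toy Python sketch below shows a bigram counter that predicts the next token in a 1D sequence, and a helper that walks to the "next pixel" of a 2D raster scan. These toy stand-ins are purely illustrative and bear no relation to production transformer or diffusion models.

```python
# Toy illustration of 1D next-token prediction vs. 2D next-pixel prediction.
from collections import Counter, defaultdict

def train_next_token(corpus):
    """Count which token tends to follow each token (a toy 1D bigram model)."""
    follows = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follows[a][b] += 1
    return follows

def predict_next_token(follows, token):
    """Return the most frequent successor of a token, if any."""
    return follows[token].most_common(1)[0][0] if follows[token] else None

corpus = "the robot moves the box then the robot stops".split()
model = train_next_token(corpus)
print(predict_next_token(model, "the"))   # most likely word after "the"

def next_pixel_position(width, height, x, y):
    """In a raster-scan autoregressive image model, the 'next pixel' is
    simply the next cell of the 2D grid."""
    return (x + 1, y) if x + 1 < width else (0, y + 1) if y + 1 < height else None

print(next_pixel_position(4, 4, 3, 0))    # wraps to the start of the next row
```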

None of these models can understand or interpret the three-dimensional world. And that’s where physical AI comes in.

Physical AI models can perceive, understand, interact with and navigate the physical world with generative AI. With accelerated computing, breakthroughs in multimodal physical AI and large-scale, physically based simulation are allowing the world to realize the value of physical AI through robots.

A robot is a system that can perceive, reason, plan, act and learn. Robots are often thought of as autonomous mobile robots (AMRs), manipulator arms or humanoids. But there are many more types of robotic embodiments.

In the near future, everything that moves, or that monitors things that move, will be autonomous robotic systems. These systems will be capable of sensing and responding to their environments.

Everything from surgical rooms to data centers, warehouses to factories, even traffic control systems or entire smart cities will transform from static, manually operated systems to autonomous, interactive systems embodied by physical AI.

The Rising Wave of Physical AI

“Physical AIs are models that can understand instructions and autonomously perform complex tasks in the real world,” said NVIDIA founder and CEO Jensen Huang, who is extremely optimistic about the extent to which robots will become part of every industry.

“Everything is going to be robotic,” he said. Huang believes there will be an entire ecosystem of robots, where factories orchestrate robots and those robots build robotic products. NVIDIA is banking on Omniverse to make this happen.

NVIDIA’s Omniverse, a platform designed for real-time 3D design collaboration and simulation, forms the basis for digital twins: virtual replicas of physical objects or systems in which robots can be tested before they operate in the real world.

Showcasing a wide range of scenarios where robots have been trained on NVIDIA’s Omniverse, Huang spoke about how companies are building robotic warehouses around it.

In digital twins, factory planners optimise floor layouts and line configurations and find optimal camera placements for monitoring future operations. In these Omniverse digital twins, also referred to as ‘robot gyms’, Foxconn developers train and test NVIDIA Isaac AI applications for robotic perception and manipulation.
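As a rough illustration of the camera-placement task mentioned above, the sketch below applies a generic greedy set-cover heuristic; this is not an Omniverse or Mega feature, and the candidate positions and visibility sets are made up.

```python
# Greedy heuristic for choosing camera positions that maximize floor coverage.

def greedy_camera_placement(candidate_views, num_cameras):
    """candidate_views maps a candidate camera position to the set of
    floor cells it can see. Returns the chosen positions and coverage."""
    covered, chosen = set(), []
    for _ in range(num_cameras):
        # Pick the candidate that adds the most uncovered cells.
        best = max(candidate_views, key=lambda p: len(candidate_views[p] - covered))
        if not candidate_views[best] - covered:
            break  # nothing new can be covered
        chosen.append(best)
        covered |= candidate_views[best]
    return chosen, covered

views = {
    "corner_NW": {(0, 0), (0, 1), (1, 0)},
    "corner_SE": {(3, 3), (2, 3), (3, 2)},
    "center":    {(1, 1), (1, 2), (2, 1), (2, 2)},
}
print(greedy_camera_placement(views, num_cameras=2))
```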

Multimodal LLMs have only accelerated the process of robotic training. “Multimodal LLMs are breakthroughs that enable robots to learn, perceive and understand the world around them, and plan how they’ll act,” said Huang.

By combining this technique with human demonstrations, robots can acquire the skills needed to interact with the world using gross and fine motor skills.
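One common way to turn human demonstrations into motor skills is behavior cloning: fit a policy that maps observations to the demonstrated actions. The sketch below is a toy, NumPy-only example on synthetic data, not NVIDIA’s training stack.

```python
# Toy behavior cloning: fit a linear policy to synthetic demonstrations.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstrations: observations (4-dim) paired with demonstrated actions (2-dim).
observations = rng.normal(size=(256, 4))
true_policy = rng.normal(size=(4, 2))
actions = observations @ true_policy + 0.01 * rng.normal(size=(256, 2))

# Fit a linear policy by least squares: action ~ observation @ W.
W, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# Use the cloned policy on a new observation.
new_observation = rng.normal(size=(1, 4))
print("predicted action:", new_observation @ W)
```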

Robotics Race Continues

The robotics race is only gaining steam with all the recent advancements. Big tech companies have aggressively invested in robotics companies over the last few years. Figure 01, the humanoid built by deep tech robotics company Figure AI, is backed by some of the biggest players, including NVIDIA, Microsoft and Jeff Bezos.


