NVIDIA adds Cosmos Policy to its world foundation models
Cosmos Policy represents an early step toward adapting world foundation models for robot control and planning, NVIDIA says. | Source: NVIDIA
NVIDIA Corp. is continuously expanding its NVIDIA Cosmos world foundation models, or WFMs, to tackle problems in robotics, autonomous vehicle development, and industrial vision AI. The company recently introduced Cosmos Policy, its latest research on advancing robot control and planning using Cosmos WFMs.
Cosmos Policy is a new robot control policy that post-trains the Cosmos Predict-2 world foundation model for manipulation tasks. It directly encodes robot actions and future states into the model, achieving state-of-the-art (SOTA) performance on LIBERO and RoboCasa benchmarks, said NVIDIA.
The company obtained Cosmos Policy by fine-tuning Cosmos Predict, a WFM trained to predict future frames. Instead of introducing new architectural components or separate action modules, Cosmos Policy adapts the pretrained model directly through a single stage of post-training on robot demonstration data.
The NVIDIA researchers defined a policy as the system’s decision-making brain that maps observations (such as camera images) to physical actions (like moving a robotic arm) to complete tasks.
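To make that definition concrete, here is a minimal, hypothetical sketch of the policy abstraction in Python. The class and field names (CameraObservation, ArmAction) are illustrative only and are not part of NVIDIA's code.

```python
# Minimal sketch of the "policy" abstraction described above (illustrative only;
# CameraObservation and ArmAction are hypothetical names, not NVIDIA's API).
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraObservation:
    rgb: np.ndarray           # H x W x 3 image from the robot's camera

@dataclass
class ArmAction:
    joint_deltas: np.ndarray  # per-joint position changes for one control step

class Policy:
    """Maps observations to actions -- the system's decision-making brain."""
    def act(self, obs: CameraObservation) -> ArmAction:
        raise NotImplementedError

class RandomPolicy(Policy):
    """Trivial stand-in policy: emits small random joint motions."""
    def act(self, obs: CameraObservation) -> ArmAction:
        return ArmAction(joint_deltas=np.random.uniform(-0.01, 0.01, size=7))
```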
What’s different about Cosmos Policy?
The breakthrough of Cosmos Policy is how it represents data, explained NVIDIA. Instead of building separate neural networks for the robot’s perception and control, it treats robot actions, physical states, and success scores just like frames in a video.
All of these are encoded as additional latent frames. These are learned using the same diffusion process as video generation, allowing the model to inherit its pre-learned understanding of physics, gravity, and how scenes evolve over time. “Latent” refers to the compressed, mathematical language a model uses to understand data internally (rather than raw pixels).
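As an illustration of the idea, the sketch below packs an action chunk, a robot state, and a value estimate into extra latent frames appended to a video latent sequence. The shapes, the zero-padding scheme, and the function names are assumptions made for clarity, not NVIDIA's actual encoding.

```python
import numpy as np

def pack_latent_sequence(video_latents, action_chunk, robot_state, value_estimate,
                         latent_dim=1024):
    """Append action, state, and value tokens as extra latent frames so one
    diffusion model can denoise them together with the video (illustrative shapes)."""
    def to_latent_frame(x):
        frame = np.zeros(latent_dim, dtype=np.float32)
        frame[: x.size] = x.ravel()  # zero-pad low-dimensional signals into a frame
        return frame

    extra_frames = np.stack([
        to_latent_frame(np.asarray(action_chunk, dtype=np.float32)),
        to_latent_frame(np.asarray(robot_state, dtype=np.float32)),
        to_latent_frame(np.asarray([value_estimate], dtype=np.float32)),
    ])
    # video_latents: (num_frames, latent_dim); result: (num_frames + 3, latent_dim)
    return np.concatenate([video_latents, extra_frames], axis=0)

# Example: 8 latent video frames plus a 16-step, 7-DoF action chunk, a 14-D state,
# and a scalar value estimate.
seq = pack_latent_sequence(np.zeros((8, 1024), dtype=np.float32),
                           np.zeros((16, 7)), np.zeros(14), 0.5)
```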
As a result, a single model can:
- Predict action chunks to guide robotic movement using hand-eye coordination (i.e., visuomotor control)
- Predict future robot observations for world modeling
- Predict expected returns (i.e., value function) for planning
All three capabilities are learned jointly within one unified model. Cosmos Policy can be deployed either as a direct policy, where only actions are generated at inference time, or as a planning policy, where multiple candidate actions are evaluated by predicting their resulting future states and values.
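A rough sketch of the two deployment modes is shown below, assuming a hypothetical model object with sample_action_chunk, predict_future, and predict_value methods; these names are illustrative, not NVIDIA's API.

```python
import numpy as np

def plan_with_world_model(model, obs, num_candidates=8):
    """Planning-mode deployment: sample several candidate action chunks, predict
    the resulting futures and their values with the same model, and keep the best."""
    best_actions, best_value = None, -np.inf
    for _ in range(num_candidates):
        actions = model.sample_action_chunk(obs)      # candidate action chunk
        future = model.predict_future(obs, actions)   # predicted future latents
        value = model.predict_value(future)           # expected return of that future
        if value > best_value:
            best_actions, best_value = actions, value
    return best_actions

def act_direct(model, obs):
    """Direct-mode deployment: generate only the actions, with no candidate search."""
    return model.sample_action_chunk(obs)

class StubModel:
    """Random stand-in; the real policy would be the post-trained world model."""
    def sample_action_chunk(self, obs):
        return np.random.randn(16, 7)
    def predict_future(self, obs, actions):
        return np.random.randn(8, 1024)
    def predict_value(self, future):
        return float(np.random.rand())

best = plan_with_world_model(StubModel(), obs=np.zeros((224, 224, 3)))
```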
More about Cosmos Predict
Recent work in robotic manipulation has increasingly relied on large pretrained backbones to improve generalization and data efficiency, NVIDIA noted. Most of these approaches build on vision-language models (VLMs) trained on large-scale image–text datasets and fine-tuned to predict robot actions.
These models learn to understand videos and describe what they see, but they do not learn how to physically perform actions. A VLM can suggest high-level actions like “Turn left” or “Pick up the purple cup,” but it does not know how to carry them out precisely.
In contrast, WFMs are trained to predict how scenes evolve over time and to generate videos that capture those temporal dynamics. These capabilities are directly relevant to robot control, where actions must account for how the environment and the robot’s own state change over time.
Cosmos Predict is trained for physical AI using a diffusion objective over continuous spatiotemporal latents, enabling it to model complex, high-dimensional, and multimodal distributions across long temporal horizons.
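In simplified form, a diffusion objective over continuous latents adds noise at a random level and trains a denoiser to recover the clean latents. The sketch below omits noise-level conditioning and the actual Cosmos architecture; it only illustrates the general idea.

```python
import torch
import torch.nn as nn

def diffusion_loss(denoiser: nn.Module, latents: torch.Tensor) -> torch.Tensor:
    """One step of a simplified continuous-latent diffusion objective: corrupt the
    latents with noise at a random level, predict the clean latents, and penalize
    the reconstruction error. (The real objective also conditions on the noise level.)"""
    sigma = torch.rand(latents.shape[0], 1, 1, device=latents.device)  # per-sample noise level
    noisy = latents + sigma * torch.randn_like(latents)
    denoised = denoiser(noisy)
    return torch.mean((denoised - latents) ** 2)

# Toy denoiser over a (batch, frames, latent_dim) sequence; the real model is a
# large transformer-based denoiser.
denoiser = nn.Sequential(nn.Linear(1024, 1024), nn.GELU(), nn.Linear(1024, 1024))
loss = diffusion_loss(denoiser, torch.randn(2, 8, 1024))
```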
NVIDIA said this design makes Cosmos Predict a suitable foundation for visuomotor control:
- The model already learns state transitions through future-frame prediction.
- Its diffusion formulation supports multimodal outputs, which is critical for tasks with multiple valid action sequences.
- The transformer-based denoiser can scale to long sequences and multiple modalities.
Cosmos Policy is built by post-training Cosmos Predict-2 to generate robot actions alongside future observations and value estimates, using the model’s native diffusion process. This allows the policy to fully inherit the pretrained model’s understanding of temporal structure and physical interaction while remaining simple to train and deploy.
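Under those assumptions, a single post-training step could look roughly like the following: the pretrained denoiser's weights are updated on demonstration batches whose latent sequences already include the extra action, state, and value frames. This is a toy sketch, not NVIDIA's training code.

```python
import torch
import torch.nn as nn

def post_training_step(denoiser: nn.Module, latents: torch.Tensor,
                       optimizer: torch.optim.Optimizer) -> float:
    """One single-stage post-training step on demonstration data, reusing the same
    denoising objective the video model was pretrained with (simplified sketch)."""
    sigma = torch.rand(latents.shape[0], 1, 1)
    noisy = latents + sigma * torch.randn_like(latents)
    loss = torch.mean((denoiser(noisy) - latents) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy stand-ins for the pretrained denoiser and one demonstration batch:
# 8 video latent frames plus 3 extra frames for actions, state, and value.
denoiser = nn.Linear(1024, 1024)
optimizer = torch.optim.AdamW(denoiser.parameters(), lr=1e-5)
batch = torch.randn(2, 8 + 3, 1024)
post_training_step(denoiser, batch, optimizer)
```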
Inside the early results
Cosmos Policy is evaluated across simulation benchmarks and real-world robot manipulation tasks, comparing against diffusion-based policies trained from scratch, video-based robot policies, and fine-tuned vision-language-action (VLA) models.
The simulation benchmarks include LIBERO and RoboCasa, two standard benchmarks for multi-task and long-horizon robotic manipulation. On LIBERO, Cosmos Policy consistently outperforms prior diffusion policies and VLA-based approaches across task suites, particularly on tasks that require precise temporal coordination and multi-step execution.
| Model | Spatial SR (%) | Object SR (%) | Goal SR (%) | Long SR (%) | Average SR (%) |
|---|---|---|---|---|---|
| Diffusion Policy | 78.3 | 92.5 | 68.3 | 50.5 | 72.4 |
| Dita | 97.4 | 94.8 | 93.2 | 83.6 | 92.3 |
| π0 | 96.8 | 98.8 | 95.8 | 85.2 | 94.2 |
| UVA | -- | -- | -- | 90.0 | -- |
| UniVLA | 96.5 | 96.8 | 95.6 | 92.0 | 95.2 |
| π0.5 | 98.8 | 98.2 | 98.0 | 92.4 | 96.9 |
| Video Policy | -- | -- | -- | 94.0 | -- |
| OpenVLA-OFT | 97.6 | 98.4 | 97.9 | 94.5 | 97.1 |
| CogVLA | 98.6 | 98.8 | 96.6 | 95.4 | 97.4 |
| Cosmos Policy (NVIDIA) | 98.1 | 100.0 | 98.2 | 97.6 | 98.5 |
On RoboCasa, Cosmos Policy can achieve higher success rates than baselines trained from scratch, demonstrating improved generalization across diverse household manipulation scenarios.
| Model | # Training Demos per Task | Average SR (%) |
|---|---|---|
| GR00T-N1 | 300 | 49.6 |
| UVA | 50 | 50.0 |
| DP-VLA | 3000 | 57.3 |
| GR00T-N1 + DreamGen | 300 (+10000 synthetic) | 57.6 |
| GR00T-N1 + DUST | 300 | 58.5 |
| UWM | 1000 | 60.8 |
| π0 | 300 | 62.5 |
| GR00T-N1.5 | 300 | 64.1 |
| Video Policy | 300 | 66.0 |
| FLARE | 300 | 66.4 |
| GR00T-N1.5 + HAMLET | 300 | 66.4 |
| Cosmos Policy (NVIDIA) | 50 | 67.1 |
In both benchmarks, initializing from Cosmos Predict provides a significant performance advantage over training equivalent architectures without video pretraining, said the NVIDIA researchers.
When deployed as a direct policy, Cosmos Policy already matches or exceeds state-of-the-art performance on most tasks. When enhanced with model-based planning, the researchers said they observed a 12.5% higher task completion rate on average in two challenging real-world manipulation tasks.
Cosmos Policy is also evaluated on real-world bimanual manipulation tasks using the ALOHA robot platform. The policy can successfully execute long-horizon manipulation tasks directly from visual observations, said NVIDIA.