Autonomous AI at Scale: Adobe Agents Unlock Breakthrough Creative Intelligence With NVIDIA and WPP

AI agents are transforming how work gets done across all industries, accelerating everything from content creation to decision-making.

NVIDIA’s expanded strategic collaborations with Adobe and WPP are bringing agentic AI to the center of enterprise marketing operations across creative production and customer experience orchestration. 

As demand for personalized customer experiences surges, brands require intelligent systems that can plan, create, produce and activate content continuously — without compromising control, governance or brand integrity.

Consider a global retailer delivering the right offer, image, copy and price across millions of product, audience and channel combinations — updated in minutes instead of months. 

For marketing and creative teams, that means moving from one-size-fits-all campaigns to tailored experiences that are always on, always relevant and on brand.

The expanded collaborations bring together three complementary strengths: Adobe’s creative and customer experience platforms and the new Adobe CX Enterprise Coworker, WPP’s global media and marketing expertise, and NVIDIA’s accelerated computing and software stack — including NVIDIA Nemotron open models, NVIDIA Agent Toolkit and the NVIDIA OpenShell secure runtime for building and running agentic AI systems.

As these agents begin orchestrating multistep workflows, tapping sensitive data and triggering actions across marketing stacks, enterprises need a way to enforce clear rules of engagement so every operation remains compliant, on brand and within defined risk boundaries.

Powered by the NVIDIA OpenShell runtime, every agent operates within a secure, isolated environment that delivers enterprise-grade control, consistency and auditability across the entire marketing lifecycle. Verifiable policy management answers the question “What can the agent do?” rather than just “What policy is in place?”

In governed environments, enterprises can also keep key workflows and intelligence services inside their trust boundary, including securely invoking Adobe CX Intelligence as part of customer experience agents.

A live demo of CX Enterprise Coworker — powered by NVIDIA Agent Toolkit, including the OpenShell runtime and Nemotron models — will be featured during Adobe Summit’s day-two keynote taking place Tuesday, April 21, at 9 a.m. PT.

The collaboration enables:

Creative Intelligence Meets Performance Intelligence With Policy-Governed Agents

Governed environments such as the ones enabled by this collaboration act as a set of “guardrails” that keep AI operations observable and auditable, preventing the system from acting outside of a company’s specific data boundaries or brand rules.

By combining Adobe’s creative platforms, WPP’s media and marketing expertise and NVIDIA’s secure infrastructure with CX Enterprise Coworker, brands no longer have to choose between speed and safety. Autonomous agents can now generate, adapt and activate content at scale while operating within governed, policy-driven environments.

The result is a new foundation for agentic marketing — where creative intelligence, performance and trust are built in from the start and delivered at global scale.

Watch NVIDIA founder and CEO Jensen Huang’s Adobe Summit fireside chat with Adobe CEO Shantanu Narayen, taking place today at 2:20 p.m. PT.

NVIDIA and Partners Showcase the Future of AI-Driven Manufacturing at Hannover Messe 2026

Manufacturing is at an inflection point. Across every major industrial economy, the pressure to do more with less — due to faster design cycles, leaner operations and strain on skilled labor pools — is accelerating the shift to AI-driven production. 

The question is no longer whether to adopt AI, but how fast and at what scale. 

At Hannover Messe 2026, running April 20-24 in Hannover, Germany, NVIDIA and its partners are demonstrating AI-driven manufacturing in action. Attendees will experience how advancements in accelerated computing, AI physics, agents and robotics are powering industrial innovation — from agentic design and engineering to real-time simulation, vision AI agents and humanoid robots operating in factories. 

The factory of the future isn’t just a concept. It’s being built now.

AI Infrastructure: Powering Europe’s Next Industrial Era

Running AI at scale across the factories and supply chains that manufacturing output relies on requires the right underlying infrastructure. As AI becomes foundational to how products, processes and facilities are designed, built and optimized, manufacturers need a unified, sovereign foundation that’s secure, scalable and purpose-built for industrial workloads.

The Industrial AI Cloud, one of Europe’s largest AI factories built in Germany by Deutsche Telekom on NVIDIA AI infrastructure, is a blueprint for the future. It provides a secure, sovereign foundation for accelerating AI and robotics across Europe’s industries. 

At the show, industry leaders, including Agile Robots, SAP, Siemens, PhysicsX and Wandelbots, will share how they are using this sovereign AI platform to run AI-accelerated workloads ranging from AI physics-driven, real-time simulation to factory-scale digital twins and software-defined robotics. EDAG, a leading independent engineering service provider, also announced it will be running its industrial metaverse platform, metys, on the Industrial AI Cloud — bringing sovereign AI infrastructure to automotive and industrial engineering at scale.

To support the increasing demand for AI infrastructure, Dell Technologies, IBM, Lenovo and PNY are also showcasing NVIDIA-accelerated systems, from the edge to data centers, enabling manufacturers to run faster simulations and develop and deploy computer vision, AI agents and robotics in production at scale.

AI-Driven Engineering

As industrial systems grow more complex, the software that engineers rely on to design, simulate and test them is being transformed with AI physics and agentic AI to keep pace. At Hannover Messe, NVIDIA partners are showcasing how AI-accelerated design and simulation is unlocking new possibilities.

Cadence, Dassault Systèmes, Siemens and Synopsys are integrating NVIDIA CUDA-X, AI physics and NVIDIA Omniverse libraries, as well as NVIDIA Nemotron open models, across their software — enabling real-time, physics-grounded simulation, AI-powered design exploration and agentic workflows that empower engineers.

Real-Time Factory Simulation

Factory-scale digital twins are critical for unlocking process simulation, real-time operations, and the testing and orchestration of robot fleets. At Hannover Messe, partners across manufacturing, energy and automotive are showing how digital twins, built on Omniverse libraries and OpenUSD, enable their customers to design, stress-test and continuously optimize their operations.

ABB will showcase how the integration of NVIDIA Omniverse libraries and Microsoft Azure cloud services into its ABB Genix Industrial IoT and AI Suite enables operations teams to understand asset performance in full context and engage AI agents to accelerate root-cause analysis.

Dassault Systèmes will demonstrate how AI-driven factories of the future are powered by virtual twin experiences. Attendees will see how these virtual twins harness NVIDIA physical AI libraries to enable autonomous, software-defined production and smarter, agile manufacturing systems.

Kongsberg Digital will highlight how integrating NVIDIA Omniverse libraries into its Kognitwin platform delivers spatial intelligence across critical energy infrastructure. The combination of digital twin models, live operational data and AI agents enables its customers to analyze complex assets, test scenarios virtually and optimize performance before changes reach the physical world.

Microsoft is demonstrating how NVIDIA Omniverse libraries integrated with Microsoft Fabric Real-Time Intelligence and IQ enable physically accurate, real-time simulations for organizations to design, simulate and optimize physical systems, while the Azure Physical AI Toolchain — built on the NVIDIA Physical AI Data Factory Blueprint — accelerates the deployment of physical AI and autonomous robots into production.

Siemens will highlight how integrating NVIDIA Omniverse libraries into its Digital Twin Composer solution turns multi-domain engineering and operational data into a comprehensive, simulation-ready digital twin — helping its customers deliver throughput gains and identify production issues before physical changes.

By combining the Wandelbots NOVA Platform with Omniverse libraries such as NVIDIA Omniverse NuRec, Wandelbots highlights a powerful pathway to digitalize real-world facilities into physically accurate simulations. For solutions like Gessmann’s GESSbot robots, this opens up future opportunities to accelerate commissioning and reduce deployment risks across complex industrial sites.

Bringing AI Agents to the Factory Floor

Traditional AI solves problems under a rigid set of conditions. AI agents bring proactive, adaptive intelligence: they build context around what’s seen and analyze what’s happening before taking action.

At the show, attendees will see how vision AI agents built on NVIDIA Metropolis libraries along with Nemotron and NVIDIA Cosmos open models are transforming industrial operations, combining multiple data streams with existing camera infrastructure to reach new levels of quality control, operational efficiency and worker safety.

Invisible AI is launching its Vision Execution System, a platform that uses vision AI agents to capture, structure and analyze every production cycle on the factory floor in real time. Built with the NVIDIA Metropolis VSS Blueprint and NVIDIA Cosmos Reason 2 and Nemotron models, these autonomous AI agents surface actionable insights directly to operators before issues compound. This class of production intelligence is already driving measurable gains at some of the world’s largest automotive manufacturers, including Toyota.

Tulip Interfaces will showcase Factory Playback, which uses the VSS blueprint and Cosmos Reason 2 to synchronize machine telemetry, operator workflows, quality events and video into a searchable, contextualized timeline of operations. Terex, a global industrial equipment manufacturer with over 40 plants, uses the platform and expects to achieve an estimated 3% increase in yield and a 10% reduction in rework.

Fogsphere extends vision AI into some of the most demanding manufacturing and industrial environments. Its Vision Agent platform — now supporting ARM-based edge deployment and training workflows built on NVIDIA Cosmos Reason 2 and the Metropolis VSS Blueprint — enables its customers to build and fine-tune visual AI agents. Saipem, an engineering services company in the energy and industrial ecosystem, is using the platform to build agents that can detect and respond in real time to high-risk safety and environmental events. 

Machines That Can Think

AI reasoning is breaking industrial robots free from single-task constraints and time-consuming reprogramming, giving them the ability to navigate unstructured environments, learn new tasks and act autonomously. At Hannover Messe, NVIDIA partners are demonstrating robots completing real production tasks and physical AI frameworks that put autonomous automation within reach of manufacturers of every size.

At a Siemens blueprint autonomous electronics factory in Erlangen, Germany, Humanoid’s HMND 01 wheeled humanoid — running the NVIDIA Jetson Thor edge AI module for on-robot compute and developed using Isaac Sim and Isaac Lab open frameworks for simulation and reinforcement learning — has completed autonomous logistics operations in a first proof of concept within the production environment. Humanoid’s simulation-first development compressed what typically takes up to two years of hardware development down to just seven months.

SCHUNK’s GROW automation cell brings physical AI into production in a standardized, deployable form. NVIDIA Omniverse libraries and Isaac simulation frameworks enable robot behavior to be simulated, trained and validated before the cell goes live. Wandelbots’ NOVA platform connects simulation to the shop floor for continuous refinement, while EY designs the operating model to scale it across Europe’s small- and medium-sized enterprises.

Using NVIDIA’s physical AI stack, including the Physical AI Data Factory Blueprint and NVIDIA IGX Thor for industrial-grade edge compute with functional safety, Hexagon Robotics is accelerating robot training, validation and deployment. The results are already taking shape, with AEON set to perform assembly operations at a BMW Plant in Leipzig — marking one of the first humanoid deployments in a German production environment.

QNX has expanded its collaboration with NVIDIA to power safety‑critical edge AI systems for robotics, medical and industrial applications, with QNX OS for Safety 8.0 now integrated on NVIDIA IGX Thor and the NVIDIA Halos safety stack.

Explore NVIDIA AI technologies for industrial and manufacturing by joining NVIDIA at Hannover Messe.

No Need for Space Gear — Capcom’s ‘PRAGMATA’ Joins GeForce NOW on Launch Day

Head straight for orbit with GeForce NOW — no space helmet required. 

PRAGMATA, Capcom’s long-awaited sci-fi action adventure, touches down on GeForce NOW the same day it launches worldwide. The journey through a cold lunar station in the near future can be streamed instantly from the cloud to almost any device, no console or heavy hardware needed.

That’s only the beginning. Five new titles join the cloud this week, expanding April’s gaming galaxy with fresh adventures and endless possibilities. 

Plus, the GeForce NOW Ultimate membership comes to gamers in India for the first time, with the service now available in beta and operated by NVIDIA.               

Time to see what’s landing on GeForce NOW.

A Mission Gone Wrong

PRAGMATA is Capcom’s newest sci-fi action adventure that blends heart, high-tech and a hauntingly quiet world set in the near future. Step into the boots of Hugh Williams, an investigator navigating a lunar research station gone silent, and Diana, a young android. Armed with an arsenal of weapons and the ability to hack, every corridor and console becomes part of a cinematic experience filled with tense exploration and fast-paced action.

The story unfolds amid the cold vacuum of the moon after a massive quake hits the station researching Lunafilament — a material said to be able to create anything given enough data. Awake, injured and disoriented, Hugh crosses paths with Diana, the mysterious android girl known as a Pragmata. Now, they must work together as they face the rogue station on their way back to Earth.

PRAGMATA shines in stunning clarity with ray-traced lighting and NVIDIA DLSS 4 technology boosting frame rates and image quality. Stream it on launch day at full fidelity, even without the latest hardware — no need to wait on a large install or worry about hardware specs. Hugh and Diana’s lunar mystery is ready when the moment strikes.

Let’s Play Today

Heroes in the cloud don’t have to wait for updates.

Fortnite: Save the World is now free and ready to stream instantly on GeForce NOW. The storm hits hard and the heroes hit harder — jump into a co-op adventure that mixes base-building, looting and all-out action against waves of Husks. Craft the ultimate fort, set sneaky traps and team up to protect what’s left of the world — no waiting for updates or patches, just pure fight-and-build mayhem. The storm’s closing in, but thanks to the cloud, the party’s jumping right into the action. “Save the World” isn’t available on mobile devices, including tablets.

In addition, members can look for the following:

What are you planning to play this weekend? Let us know on X or in the comments below.

Rethinking AI TCO: Why Cost per Token Is the Only Metric That Matters

Traditional data centers only stored, retrieved and processed data. In the generative and agentic AI era, these facilities have evolved into AI token factories. With AI inference becoming their primary workload, their primary output is intelligence manufactured in the form of tokens. 

This transformation demands a corresponding shift in how the economics of AI infrastructure, including total cost of ownership (TCO), is assessed. Enterprises evaluating AI infrastructure still too often focus on peak chip specifications, compute cost or floating point operations per second for every dollar spent, aka FLOPS per dollar. 

The distinction that matters is this: cost per GPU per hour and FLOPS per dollar measure what goes into the system, while cost per token measures what comes out.

The first two are merely input metrics. Optimizing for inputs while the business runs on output is a fundamental mismatch. 

Cost per token determines whether enterprises can profitably scale AI. It’s the one TCO metric that directly accounts for hardware performance, software optimization, ecosystem support and real-world utilization — and NVIDIA delivers the lowest cost per token in the industry. 

What Are the Factors That Lower Token Cost?

Understanding how to optimize token cost requires looking at the equation for calculating cost per million tokens.

Cost per million tokens = [cost per GPU per hour ÷ (tokens per GPU per second × 60 seconds × 60 minutes)] × 1 million

In this equation, many enterprises evaluating AI infrastructure focus on the numerator: the cost per GPU per hour. For cloud deployments, this is the hourly rate paid to a cloud provider; for on-premises deployments, it’s the effective hourly cost derived from amortizing owned infrastructure. The real key to reducing token cost, however, lies in the denominator: maximizing the delivered token output.
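The equation can be expressed as a short function. The figures below are a minimal sketch: the $2.00 hourly rate and 1,000 tokens-per-second throughput are illustrative, not sourced numbers.

```python
def cost_per_million_tokens(gpu_cost_per_hour: float, tokens_per_gpu_per_second: float) -> float:
    """Hourly GPU cost divided by tokens generated per GPU-hour, scaled to one million tokens."""
    tokens_per_gpu_per_hour = tokens_per_gpu_per_second * 60 * 60
    return gpu_cost_per_hour / tokens_per_gpu_per_hour * 1_000_000

# Illustrative: a GPU billed at $2.00/hour sustaining 1,000 tokens/second
# generates 3.6M tokens per hour, so each million tokens costs about $0.56.
print(round(cost_per_million_tokens(2.00, 1_000), 2))  # → 0.56
```

Note that doubling delivered throughput halves cost per token at the same hourly rate, which is why the denominator dominates the economics.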

That denominator carries two business implications.

So focusing only on the numerator means missing what drives the denominator. Think of it as an “inference iceberg”: The numerator sits above the surface, visible and easy to compare. The denominator is everything beneath the surface, which represents key factors that determine real-world token output. Accurately evaluating AI infrastructure starts with asking what lies beneath. 

Image describing the “inference iceberg.” The top of the iceberg represents peak chip specifications such as FLOPS and high-bandwidth memory (cost per GPU per hour, FLOPS per dollar). The bottom represents extreme codesign across compute, networking, memory, storage, software and ecosystem (cost per token, tokens per watt).

Every one of these algorithmic, hardware and software optimizations must be active and integrated, or the denominator collapses. A “cheaper” GPU that delivers significantly fewer tokens per second results in a much higher cost per token. AI infrastructure that gets it right across the full stack ensures that every optimization enhances the others.

Why Does Cost per Token Matter Much More Than FLOPS per Dollar?

The following data for the DeepSeek-R1 AI model demonstrates the difference between theoretical and actual business outcomes.

Looking at compute cost alone, the NVIDIA Blackwell platform appears to cost roughly 2x more than NVIDIA Hopper — but compute cost says nothing about the output that investment buys. An analysis of mere FLOPS per dollar suggests a 2x NVIDIA Blackwell advantage compared with the NVIDIA Hopper architecture. However, the actual outcome is orders of magnitude different: Blackwell delivers more than 50x greater token output per watt than Hopper, resulting in nearly 35x lower cost per million tokens. 

| Metric | NVIDIA Hopper (HGX H200) | NVIDIA Blackwell (GB300 NVL72) | NVIDIA Blackwell Relative to Hopper |
| --- | --- | --- | --- |
| Cost per GPU per Hour ($) | $1.41 | $2.65 | 2x |
| FLOPS per Dollar (PFLOPS) | 2.8 | 5.6 | 2x |
| Token Output per GPU | 90 | 6K | 65x |
| Token Output per MW | 54K | 2.8M | 50x |
| Cost per Million Tokens ($) | $4.20 | $0.12 | 35x lower |

Note: Data is sourced from NVIDIA analysis and the SemiAnalysis InferenceX v2 benchmark. 
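The table’s relative columns can be reproduced from its absolute figures. This is a quick sanity check, not additional data; small differences from the published multiples come from rounding.

```python
# Absolute figures from the table above (Hopper HGX H200 vs. Blackwell GB300 NVL72).
hopper = {"tokens_per_gpu": 90, "tokens_per_mw": 54_000, "cost_per_mtok": 4.20}
blackwell = {"tokens_per_gpu": 6_000, "tokens_per_mw": 2_800_000, "cost_per_mtok": 0.12}

print(round(blackwell["tokens_per_gpu"] / hopper["tokens_per_gpu"]))  # ~67x token output per GPU (published as 65x)
print(round(blackwell["tokens_per_mw"] / hopper["tokens_per_mw"]))    # ~52x token output per MW ("more than 50x")
print(round(hopper["cost_per_mtok"] / blackwell["cost_per_mtok"]))    # 35x lower cost per million tokens
```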

This divergence shows that NVIDIA Blackwell delivers a leap in business value over the earlier Hopper generation that far outpaces the increase in system cost.

How to Choose the Right AI Infrastructure

Comparing AI infrastructure based on compute cost or theoretical FLOPS per dollar isn’t just insufficient; it doesn’t provide an accurate representation of inference economics. As the data demonstrates, an accurate evaluation of AI infrastructure’s revenue potential and profitability requires a shift from input metrics to cost per token and delivered token output.

NVIDIA delivers the industry’s lowest token cost and highest token throughput through extreme codesign across compute, networking, memory, storage, software and partner technologies. Moreover, the constant optimization of open source inference software such as vLLM, SGLang, NVIDIA TensorRT-LLM and NVIDIA Dynamo built on the NVIDIA platform means that on existing NVIDIA infrastructure, token output continues to increase and the cost per token continues to decline long after it’s acquired.

Leading cloud providers and NVIDIA cloud partners are already delivering this advantage at scale. Partners such as CoreWeave, Nebius, Nscale and Together AI have deployed NVIDIA Blackwell infrastructure and optimized their stacks to bring enterprises the lowest token cost available today, with the full benefit of NVIDIA’s hardware, software and ecosystem codesign behind every interaction served.

New Adobe Premiere Color Grading Mode Accelerated on NVIDIA GPUs

The NAB Show 2026 trade show, running April 18-22 in Las Vegas, is set to showcase a wave of new features and optimizations for top video editing applications. Bringing together over 60,000 content professionals from across the broadcast, media and entertainment industries, the event highlights how video editors, livestreamers and professional creators are exploring new tools, accelerated by NVIDIA RTX technology, to enhance and streamline their creative workflows.

At the show, Adobe is announcing a new Adobe Premiere Color Mode in beta. 

Designed to function as a dedicated grading environment nested directly within Premiere, it offers a clean, responsive interface that lets editors stay in their creative flow rather than relying on external tools for color correction. Tapping into GPU acceleration on NVIDIA GeForce RTX- and NVIDIA RTX PRO-equipped systems, this streamlined workflow, operating in 32-bit color depth for the first time, delivers significantly faster performance and higher quality.

NVIDIA also launched a new update to NVIDIA Project G-Assist — an experimental AI assistant that helps tune, control and optimize GeForce RTX systems. 

Color Meets Compute

Premiere’s Color Mode is a clean, responsive interface within Adobe Premiere that lets editors color grade video natively. Every element is designed to guide editors through the grading process without distractions. A large program monitor anchors the experience, providing immediate visual feedback as adjustments are made to enable faster decision-making and more precise control.

A clip grid view allows editors to visualize progression across shots in a sequence. This makes it easier to maintain consistency across scenes and ensure a cohesive look throughout a project. 

Controls are organized into focused modules, each tailored to a specific aspect of color grading. Multiple modules can be active simultaneously, giving editors flexibility while maintaining clarity. Each control features a unique heads-up display (HUD), providing contextual guidance without cluttering the interface.

Color grading is one of the most computationally intensive tasks in post-production. Every adjustment — bidirectional controls, multi-zone tonal shaping and stacked color operations — runs on NVIDIA GPUs, accelerating playback, iteration and visual feedback.

Editors can work with up to six luminance adjustment zones, moving beyond traditional highlights, midtones and shadows models. This allows for more nuanced tonal control and finer adjustments across the image. 

Visual scopes are context-aware, dynamically adapting based on the selected tool. HUD overlays provide visual cues directly within the scopes, helping editors understand how their adjustments affect the image without needing to interpret complex visual scopes and graphs.

The entire system now operates in 32-bit color precision, delivering maximum color fidelity and preventing unwanted clipping. Editors retain full control, with the ability to clip colors intentionally when needed for creative effect. Color styles can also be applied flexibly, at the sequence, clip, reel or custom group level, making it easier to manage looks across complex projects.

Download the Adobe Premiere (beta) to get started with Color Mode. 

Project G-Assist: Enhanced Recommendations and Controls 

The NVIDIA Project G-Assist on-device AI assistant helps users get the most out of their hardware. Today’s update adds an advanced detection system for gaming settings, as well as an enhanced knowledge system, enabling G-Assist to deliver higher accuracy when providing advice or adjusting settings for esports and AAA gaming.

The assistant can also now control more settings across systems. It can configure advanced RTX features from the NVIDIA App, including NVIDIA DLSS Overrides, Smooth Motion, RTX HDR, Digital Vibrance and encoder settings.

Download Project G-Assist v0.2.1 from the NVIDIA App and the Stream Deck plug-in from mod.io.

#ICYMI: The Latest Updates for RTX AI PCs

📹 Learn how visual effects shop Corridor Crew’s Niko Pueringer built his own green screen key tool, powered by NVIDIA RTX GPUs, at NAB. Stop by the Puget Systems booth on Monday, April 20, at 1 p.m. PT for a special presentation, or tune in on NVIDIA Studio’s YouTube channel on Tuesday, April 21, at 12 p.m. PT to watch the full session.

🖼 Also at NAB, join NVIDIA’s Sabour Amirazodi for a special presentation at the ASUS booth on Tuesday, April 21, at 11 a.m. PT. Amirazodi will showcase how guiding generative AI can produce creative outputs like storyboards or entire movie trailers — based on a single image input. 

📽 Check out content creator Gavin Herman’s Studio Session, “How to Edit Professional Talking Head Videos in DaVinci Resolve,” on the NVIDIA Studio YouTube channel. Generative workflow specialists can also watch a two-hour, instructor-led workshop on using NVIDIA GPU acceleration for ComfyUI.

🦞 LM Studio is now an official OpenClaw provider. OpenClaw can now run local models through LM Studio on NVIDIA GPUs, unlocking faster on-device performance.

🦥 Unsloth and NVIDIA have teamed up to eliminate hidden bottlenecks that slow down fine-tuning on NVIDIA GPUs, improving fine-tuning performance by 15%. 

✨ Google’s Gemma 4 family of omni-capable models are built for local AI across a wide range of devices. Google and NVIDIA have optimized Gemma 4 for NVIDIA GPUs, enabling efficient performance on NVIDIA RTX-powered PCs and workstations, NVIDIA DGX Spark personal AI supercomputers and NVIDIA Jetson Orin Nano edge AI modules.

📽 Check out this NVIDIA GTC session on how developers can build, run and optimize AI agents locally on NVIDIA GPUs, covering everything from quantization to backends like Ollama and applications like OpenClaw and ComfyUI.

👀 Wondershare Filmora has added a new Eye Contact Correction feature based on the NVIDIA Broadcast Eye Contact feature. Running in the cloud on NVIDIA GPUs, it’s designed to refine the gaze of subjects in post-production for a more natural, confident and camera-ready look, delivering polished, professional videos in seconds. 

Filmora’s AI Eye Contact Correction feature powered in the cloud by NVIDIA GPUs.

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.

Follow NVIDIA Workstation on LinkedIn and X

Strength and Destiny Collide: ‘Samson: A Tyndalston Story’ Arrives in the Cloud

A timeless story of grit, faith and rebellion takes center stage as Samson: A Tyndalston Story joins the GeForce NOW library today. 

The highly anticipated release from Liquid Swords can now be streamed on nearly any device with GeForce NOW bringing cinematic intensity and mythic storytelling to the cloud.

Catch it as part of four new games in the cloud this week.

Stream the Power

A new legend rises.

Tyndalston is a city built on debt, muscle and memory. Samson: A Tyndalston Story from Liquid Swords follows Samson, a former enforcer pulled back to the streets that made him. Violence is currency as every fight is personal, every hit carries history and every escape feels earned in a city that never forgives.

Gameplay blends cinematic melee action with choice-driven narrative progression. Every confrontation — from shadowed alley brawls to large-scale set pieces — feels purposeful, reflecting Samson’s internal struggle between vengeance and redemption. Brawls hit fast and close. Cars aren’t set pieces — they’re weapons. Momentum and terrain decide if the player walks away or falls harder. Every job, debt and decision cuts toward freedom or collapse.

The game takes full advantage of ray-traced global illumination, reflections and shadows, creating a city that feels cinematic and alive. NVIDIA DLSS 3.5 boosts performance, while NVIDIA Reflex technology cuts down latency to keep controls razor-sharp during split-second fights. With GeForce NOW, the experience streams instantly at maximum fidelity, even without the latest hardware. No waiting around for downloads or worrying about system specs, just dive straight into the grit and glow of Tyndalston.

Celebrate New Games

No arms, no problem.

Celebrate three decades of Rayman with the definitive edition of the platforming classic in Rayman 30th Anniversary Edition, featuring five versions from iconic consoles, over 120 additional levels and an exclusive documentary that explores the creation of the limbless hero. Stream it on GeForce NOW without having to wait around for downloads or updates. 

In addition, members can look for the following:

GeForce RTX 5080-ready game this week, in addition to Samson and Morbid Metal:

What are you planning to play this weekend? Let us know on X or in the comments below.

National Robotics Week — Latest Physical AI Research, Breakthroughs and Resources

This National Robotics Week, NVIDIA is highlighting the breakthroughs that are bringing AI into the physical world — as well as the growing wave of robots transforming industries, from agriculture and manufacturing to energy and beyond.

Advancements in robot learning, simulation and foundation models are accelerating development, enabling robots to move from training in virtual environments to real-world deployment faster than ever.

With NVIDIA platforms for simulation, synthetic data and AI-powered robot learning, developers now have the tools to build machines that can perceive, reason and act in complex environments.

Check back here all week for coverage on the latest NVIDIA physical AI technologies.

From RTX to Spark: NVIDIA Accelerates Gemma 4 for Local Agentic AI

Open models are driving a new wave of on-device AI, extending innovation beyond the cloud to everyday devices. As these models advance, their value increasingly depends on access to local, real-time context that can turn meaningful insights into action. 

Designed for this shift, Google’s latest additions to the Gemma 4 family introduce a class of small, fast and omni-capable models built for efficient local execution across a wide range of devices.  

Google and NVIDIA have collaborated to optimize Gemma 4 for NVIDIA GPUs, enabling efficient performance across a range of systems — from data center deployments to NVIDIA RTX-powered PCs and workstations, the NVIDIA DGX Spark personal AI supercomputer and NVIDIA Jetson Orin Nano edge AI modules.

Gemma 4: Compact Models Optimized for NVIDIA GPUs 

The latest additions to the Gemma 4 family of open models, spanning E2B, E4B, 26B and 31B variants, are designed for efficient deployment from edge devices to high-performance GPUs.

All configurations were measured using Q4_K_M quantization, with batch size (BS) = 1, input sequence length (ISL) = 4,096 and output sequence length (OSL) = 128, on NVIDIA GeForce RTX 5090 and Mac M3 Ultra desktops. Token generation throughput was measured on llama.cpp build b7789, using the llama-bench tool.

This new generation of compact models supports a range of tasks, including: 

The E2B and E4B models are built for ultraefficient, low-latency inference at the edge, running completely offline across many devices, including Jetson Orin Nano modules.

The 26B and 31B models are designed for high-performance reasoning and developer-centric workflows, making them well suited for agentic AI. Optimized to deliver state-of-the-art, accessible reasoning, these models run efficiently on NVIDIA RTX GPUs and DGX Spark — powering development environments, coding assistants and agent-driven workflows.

As local agentic AI continues to gain momentum, applications like OpenClaw are enabling always-on AI assistants on RTX PCs, workstations and DGX Spark. The latest Gemma 4 models are compatible with OpenClaw, allowing users to build capable local agents that draw context from personal files, applications and workflows to automate tasks. Learn how to run OpenClaw for free on RTX GPUs and DGX Spark or using the DGX Spark OpenClaw playbook. 

Check out the Google DeepMind announcement blog to learn more about the latest additions to the Gemma 4 family.

Getting Started: Gemma 4 on RTX GPUs and DGX Spark 

NVIDIA has collaborated with Ollama and llama.cpp to provide the best local deployment experience for each of the Gemma 4 models.    

To use Gemma 4 locally, users can download Ollama to run Gemma 4 models or install llama.cpp and pair it with the Gemma 4 GGUF Hugging Face checkpoint. Additionally, Unsloth provides day-one support with optimized and quantized models for efficient local fine-tuning and deployment via Unsloth Studio. Start running and fine-tuning Gemma 4 in Unsloth Studio today. 
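As a minimal sketch of the local workflow, the snippet below sends a one-shot generation request to a locally running Ollama server through its HTTP API. The `gemma4` model tag is a placeholder assumption, not a confirmed tag — use whichever tag Ollama publishes for the model you pulled.

```python
import json
import urllib.request


def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's local /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(prompt: str, model: str = "gemma4",
             host: str = "http://localhost:11434") -> str:
    """Query a model served by a locally running Ollama instance.

    Assumes Ollama is installed and the model has been pulled first
    (e.g. `ollama pull gemma4` -- the tag name is a placeholder).
    """
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Ollama returns the full completion under the "response" key
        # when streaming is disabled.
        return json.loads(resp.read())["response"]
```

Because inference runs entirely on the local GPU, no API key or cloud endpoint is involved; swapping `model` is all it takes to compare the different Gemma 4 variants.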

Running open models like the Gemma 4 family on NVIDIA GPUs achieves optimal performance because NVIDIA Tensor Cores accelerate AI inference workloads to deliver higher throughput and lower latency for local execution. Plus, the CUDA software stack ensures broad compatibility across leading frameworks and tools, enabling new models to run efficiently from day one.  

This combination allows open models like Gemma 4 to scale across a wide range of systems — from Jetson Orin Nano at the edge to RTX PCs, workstations and DGX Spark — without requiring extensive optimization. 

Check out the NVIDIA technical blog for more details on how to get started with Gemma 4 on NVIDIA GPUs and learn more about NVIDIA’s work on open models. 

#ICYMI: The Latest Updates for RTX AI PCs 

✨ Catch up on RTX AI Garage blogs for a host of agentic AI announcements from NVIDIA GTC, such as new open models for local agents. These models include NVIDIA Nemotron 3 Nano 4B and Nemotron 3 Super 120B, and optimizations for Qwen 3.5 and Mistral Small 4. 

NVIDIA recently introduced NVIDIA NemoClaw, an open source stack that optimizes OpenClaw experiences on NVIDIA devices by increasing security and supporting local models.

🚀 Accomplish.ai announced Accomplish FREE, a no-cost version of its open source desktop AI agent with built-in models. It harnesses NVIDIA GPUs to run open weight models locally, while a hybrid router dynamically balances workloads between local RTX hardware and the cloud — enabling fast, private, zero-configuration execution without requiring an application programming interface key. 

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.

Follow NVIDIA Workstation on LinkedIn and X 

Press Start on April: GeForce NOW Brings 10 Games to the Cloud

No joke — GFN Thursday is skipping the tricks and heading straight into the games. April kicks off with ten new titles, bringing fresh adventures to GeForce NOW, including the launch of Capcom’s highly anticipated PRAGMATA.

Ten new games are available to stream this week, including Arknights: Endfield, which expands the acclaimed series into a full 3D real‑time strategy adventure. On GeForce NOW, every battle flows with precision and every mission looks sharper than ever.

So gear up, grab a controller or gaming device of choice, and get ready to stream — another month of great gaming is now underway.

Command the Frontier

Arknights Endfield on GeForce NOW
Reclaim the frontier using cloud technology.

Arknights: Endfield from Hypergryph expands the acclaimed Arknights universe into a full 3D, real‑time strategy role-playing game. Blending tactical planning with sleek sci‑fi aesthetics, the title invites players into a world featuring terraformed settlements, advanced technology and looming threats beneath the planet’s surface.

Set on the perilous planet Talos‑II, Endfield follows a group of pioneers uncovering lost secrets and battling hostile factions. The game seamlessly merges base‑building, exploration and combat — with squads of operators coordinating in real time to overcome environmental hazards and powerful enemies. Every decision impacts survival, progress and the unfolding mystery of the world.

On GeForce NOW, Arknights: Endfield can be played at the highest settings from virtually any device, enabling crisp visuals and high performance without compromise. GeForce RTX rendering brings the game’s metallic skylines and glowing wastelands to life, while ultralow-latency streaming ensures every tactical command lands with precision. 

Spring Into April

Mega Man Star Force Legacy Collection
He’s back.

Capcom’s Mega Man Star Force Legacy Collection includes seven games and additional features, including a gallery of illustrations and music. Eleven‑year‑old Geo Stelar is a grieving boy who isolates himself after the mysterious disappearance of his astronaut father. His life changes when he encounters an extraterrestrial being named Omega‑Xis, granting him the power to become Mega Man. The collection streams instantly with GeForce NOW, turning any device into a Star Force terminal ready to save the world once more.

Check out what else is available this week:

And look forward to the games coming throughout the month:

More of March

In addition to the 15 games announced last month, a dozen more joined the GeForce NOW library:

What are you planning to play this weekend? Check out Crimson Desert on GeForce NOW in Anytime Anywhere Gaming’s YouTube review.


Efficiency at Scale: NVIDIA, Energy Leaders Accelerating Power‑Flexible AI Factories to Fortify the Grid

CERAWeek — dubbed the Davos of energy — is where policymakers, producers, technologists and financiers gather to discuss how the world powers itself next. 

At the conference last week, NVIDIA and Emerald AI unveiled a new way forward — treating AI factories not as static power loads but as flexible, intelligent grid assets. This collaboration unifies accelerated computing, AI factory reference architectures and real‑time energy orchestration, helping large AI deployments connect to the grid faster, operate more efficiently and fortify system reliability.

Built on the NVIDIA Vera Rubin DSX AI Factory reference design and Emerald AI’s Conductor platform, the approach brings together compute, power networking and control into a single architecture. The result is an AI factory that can generate high‑value AI tokens while dynamically responding to grid conditions — flexing when needed, supporting reliability and reducing the need to overbuild infrastructure for peak demand. 

AES, Constellation, Invenergy, NextEra Energy, Nscale Energy & Power and Vistra are working to build the energy generation capacity needed to meet rapidly growing power demand. The companies plan to collaborate on optimized generation strategies to support AI factories built on the NVIDIA and Emerald AI architecture, including hybrid projects that use co‑located power to accelerate time to power while delivering value to the broader grid. By pairing large AI loads with flexible operations, new generation resources and intelligent controls, this approach strengthens grid reliability. 

It’s an important milestone in grid resilience, supported by an ecosystem for advanced AI factories. This new computing infrastructure paradigm — described by NVIDIA founder and CEO Jensen Huang as a five-layer AI cake — has energy as its foundational layer. 

Driving Improvements in Tokens Per Second Per Watt

Power constraints are reshaping AI data centers, with energy efficiency — performance per watt, and specifically tokens per second per watt — becoming the defining metric of modern computing infrastructure. By prioritizing computational efficiency, organizations can lower operating costs, maximize revenue and create a resilient digital infrastructure for businesses and consumers across America and worldwide.
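As a back-of-the-envelope illustration of the metric, tokens per second per watt is simply generation throughput normalized by average power draw. The figures below are hypothetical, not NVIDIA benchmark results:

```python
def tokens_per_second_per_watt(tokens_generated: int,
                               elapsed_s: float,
                               avg_power_w: float) -> float:
    """Efficiency metric: throughput (tokens/s) divided by power draw (W)."""
    if elapsed_s <= 0 or avg_power_w <= 0:
        raise ValueError("elapsed time and power draw must be positive")
    return tokens_generated / elapsed_s / avg_power_w


# Hypothetical run: 12,000 tokens generated in 10 s at an average 400 W draw,
# i.e. 1,200 tokens/s of throughput.
efficiency = tokens_per_second_per_watt(12_000, 10.0, 400.0)
```

Because the metric normalizes by power rather than raw speed, it lets operators compare deployments of very different sizes under the same fixed power budget.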

“Power is a concern, but it’s not the only concern,” Huang said on a recent Lex Fridman podcast. “That’s the reason why we’re pushing so hard on extreme codesign, so that we can improve the tokens per second per watt orders of magnitude every single year.” 

NVIDIA has a long history of driving performance and energy efficiency. From the NVIDIA Kepler GPU in 2012 to the NVIDIA Vera Rubin platform this year, the number of tokens generated within the same power budget has increased by more than 1 million times. 

It takes industry collaboration across the five-layer AI cake — from energy to chips, infrastructure, models and applications — to make this happen.

Robotics, Digital Twins and AI Upskilling Drive Energy Advances

NVIDIA ecosystem partners showcased at the event how AI, simulation and workforce innovation are accelerating the energy infrastructure needed to support the intelligence era. Announcements from Maximo, TerraPower and Adaptive Construction Solutions exemplify how AI is compressing timelines across construction, power generation and talent development.

Maximo, a solar robotics company incubated at AES, announced the completion of a 100‑megawatt robotic solar installation at AES’ Bellefield site. Using AI‑driven robotics developed with NVIDIA accelerated computing, NVIDIA Omniverse libraries and the NVIDIA Isaac Sim framework, Maximo demonstrated that autonomous installations can now operate reliably at utility scale. The approach improves installation speed, safety and consistency, helping close the gap between rising electricity demand and construction capacity.

TerraPower, working with SoftServe, previewed an NVIDIA Omniverse‑powered digital twin platform designed to dramatically shorten advanced nuclear plant siting and design timelines. By applying AI and simulation to early‑stage engineering, the platform reduces design cycles from years to months, accelerating deployment of TerraPower’s Natrium energy plants while improving design and grid integration.

Adaptive Construction Solutions announced a national registered apprenticeship initiative, in collaboration with NVIDIA, to help build the skilled workforce required for AI factories and energy infrastructure. The program aims to scale training for critical trades, expanding access to high‑demand careers while supporting the rapid buildout of AI‑driven power systems.

Together, these efforts demonstrated how AI, digital twins and workforce innovation are converging to deliver faster, more resilient energy infrastructure.

Coming Together on Scaling AI Factories for Grid Reliability 

GE Vernova, Schneider Electric and Vertiv highlighted how digital twins, validated reference designs and converged infrastructure are becoming essential to scaling AI factories as reliable grid participants. The announcements address the “power‑to‑rack” challenge — designing AI infrastructure as an integrated energy and compute system from day one. 

GE Vernova outlined how high‑fidelity digital twins aligned with the NVIDIA Omniverse DSX Blueprint enable utilities and developers to simulate grid behavior, substations and AI factory loads together before deployment. Such system‑level modeling helps validate interconnection strategies, reduce risk and accelerate time to power in constrained grid environments.

Schneider Electric announced new validated NVIDIA Vera Rubin reference designs and lifecycle digital twin architectures developed with AVEVA. By simulating power, cooling and controls in Omniverse, Schneider enables operators to optimize performance per watt, validate designs before buildout and operate AI factories more efficiently and predictably at scale.

Vertiv outlined converged, simulation‑ready physical infrastructure built on repeatable power and cooling building blocks. Integrated with the Vera Rubin DSX reference design, Vertiv’s approach reduces design and deployment complexity while supporting faster, more confident scaling of AI factories.

Together, these industry efforts provide a digital path forward, including the validated architectures and physical infrastructure needed to turn AI factories into flexible, grid‑aware assets for efficiently powering the world.

Learn more about how NVIDIA and its partners are advancing energy solutions with AI and high-performance computing.