Software Developers Launch OpenUSD and Generative AI-Powered Product Configurators Built on NVIDIA Omniverse

From designing dream cars to customizing clothing, 3D product configurators are ringing in a new era of hyper-personalization that will benefit retailers and consumers.

Developers are delivering innovative virtual product experiences and automated personalization using Universal Scene Description (aka OpenUSD), NVIDIA RTX technologies from NVIDIA Omniverse software development kits (SDKs) and application programming interfaces (APIs), and generative AI from NVIDIA Edify models.

Together, these technologies enable developers to create configurator applications that deliver physically accurate, photoreal digital twins of products, revolutionizing the way brands personalize buyer journeys at unprecedented scale.

For example, Dassault Systèmes’ 3DEXCITE brand is adopting Omniverse Cloud APIs to enable interoperability with generative AI services, such as Shutterstock’s Edify3D or Edify 360, directly inside its web-based application.

By using NVIDIA Edify-powered models, trained by Shutterstock, Dassault Systèmes can generate stunning 3D environments from text prompts to instantly personalize scenes representing physically accurate products. And with Omniverse APIs, the company can supercharge the web-based app with real-time ray-traced rendering.

Many other developers are also building 3D product configurator software and solutions with NVIDIA Omniverse SDKs and APIs.

CGI studio Katana has developed a content creation application, COATCreate, used by manufacturers such as Nissan, that allows marketing assets to be staged and created faster with product digital twins. COATCreate also enables users to view and customize the digital twin while wearing an Apple Vision Pro headset, unlocking real-time ray-traced extended reality experiences.

Brickland, another CGI studio, is developing real-time virtual experiences that let users customize clothing by choosing from predefined parameters such as color and material. Through its Digitex initiative, Brickland is expanding into digital textures, allowing consumers to visualize and interact with extreme levels of detail in their 3D assets thanks to RTX real-time rendering.

Configit connected its configurator logic tool, Configit Ace, to Omniverse and OpenUSD, streamlining management of the complex rules systems behind configurators and pairing that logic with the rendering capabilities of Omniverse and RTX. This allows for rapid creation of articulated product configurators and lets developers power real-time ray-traced rendering in their solutions.
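The rules layer a tool like Configit Ace manages can be pictured as constraints over option sets. The sketch below is not Configit Ace's actual API — just a minimal Python illustration, with hypothetical option and rule names, of what a configurator's validity check looks like:

```python
# Minimal configurator-rules sketch (hypothetical options; not Configit Ace's API).
# Each rule is a predicate that a candidate configuration must satisfy.

OPTIONS = {
    "trim": {"sport", "luxury"},
    "wheels": {"18in", "20in"},
    "color": {"red", "blue", "matte-black"},
}

RULES = [
    # 20-inch wheels are only available on the sport trim.
    lambda c: c["wheels"] != "20in" or c["trim"] == "sport",
    # Matte black is a luxury-only paint option.
    lambda c: c["color"] != "matte-black" or c["trim"] == "luxury",
]

def is_valid(config):
    """Check a configuration against the option catalog and every rule."""
    return (
        all(config[k] in choices for k, choices in OPTIONS.items())
        and all(rule(config) for rule in RULES)
    )

print(is_valid({"trim": "sport", "wheels": "20in", "color": "red"}))          # True
print(is_valid({"trim": "sport", "wheels": "18in", "color": "matte-black"}))  # False
```

Keeping rules declarative like this is what lets a single logic layer drive many front ends — the renderer only ever receives configurations that have already passed validation.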

WPP has developed a content engine that harnesses OpenUSD and AI to enable creative teams to produce high-quality commercial content faster, more efficiently and at scale while remaining aligned with a client’s brand.

Media.Monks has developed an AI-centric professional managed service that leverages Omniverse called Monks.Flow, which helps brands virtually explore different customizable product designs and unlock scale and hyper-personalization across any customer journey.

Accenture Song, the world’s largest tech-powered creative group, is using Omniverse SDKs to generate marketing content for Defender vehicles. Using it with the Edify-powered generative AI microservice, Accenture Song is enabling the creation of cinematic 3D environments via conversational prompts.

Product Digital Twins in the Era of Industrial Digitalization

Forecasts indicate that consumer purchases, including high-value items like vehicles and luxury goods, will increasingly take place online in the coming decade. 3D product digital twins and automated personalization with generative AI serve as invaluable tools for brands to showcase their products and enhance customer engagement in the changing retail landscape.

3D configurators provide tangible benefits for businesses, including increased average selling prices, reduced return rates and stronger brand loyalty. Once a digital twin is built, it can serve many purposes and be updated to meet shifting consumer preferences with minimal time, cost and effort.

Creating a 3D Product Configurator

The process of creating a 3D product configurator begins with harnessing OpenUSD’s powerful composition engine and interoperability. These features enable developers to create dynamic, interactive experiences that accurately reflect the nuances of each product.
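One composition feature configurators lean on heavily is OpenUSD's variant set, which stores alternate versions of an asset — paint colors, trims, materials — inside a single composed scene. A minimal `.usda` sketch (hypothetical prim and variant names) might look like:

```usda
#usda 1.0

def Xform "Car" (
    variants = {
        string paint = "red"
    }
    prepend variantSets = "paint"
)
{
    variantSet "paint" = {
        "red" {
            over "Body" {
                color3f[] primvars:displayColor = [(1.0, 0.0, 0.0)]
            }
        }
        "blue" {
            over "Body" {
                color3f[] primvars:displayColor = [(0.0, 0.0, 1.0)]
            }
        }
    }

    def Mesh "Body"
    {
    }
}
```

Switching the `paint` selection recomposes the stage without duplicating geometry — which is what makes per-customer personalization cheap once the product digital twin exists.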

Teams can also integrate generative AI technologies into OpenUSD-based product configurators using NVIDIA Omniverse APIs to enhance the realism and customization options available to users. By leveraging AI, configurators can intelligently adapt to user inputs, offering personalized recommendations and dynamically adjusting product configurations in real time. And with NVIDIA Graphics Delivery Network, high-quality, real-time viewports can be embedded into web applications so consumers can browse products in full fidelity, on nearly any device.

The possibilities for 3D product configurators are virtually limitless, applicable across a wide range of industries and use cases.

To start, get NVIDIA Omniverse and follow along with a tutorial series.

NVIDIA Hopper Leaps Ahead in Generative AI at MLPerf

It’s official: NVIDIA delivered the world’s fastest platform in industry-standard tests for inference on generative AI.

In the latest MLPerf benchmarks, NVIDIA TensorRT-LLM — software that speeds and simplifies the complex job of inference on large language models — boosted the performance of NVIDIA Hopper architecture GPUs on the GPT-J LLM nearly 3x over their results just six months ago.

The dramatic speedup demonstrates the power of NVIDIA’s full-stack platform of chips, systems and software to handle the demanding requirements of running generative AI.

Leading companies are using TensorRT-LLM to optimize their models. And NVIDIA NIM — a set of inference microservices that includes inferencing engines like TensorRT-LLM — makes it easier than ever for businesses to deploy NVIDIA’s inference platform.

Raising the Bar in Generative AI

TensorRT-LLM running on NVIDIA H200 Tensor Core GPUs — the latest, memory-enhanced Hopper GPUs — delivered the fastest performance running inference in MLPerf’s biggest test of generative AI to date.

The new benchmark uses the largest version of Llama 2, a state-of-the-art large language model packing 70 billion parameters. The model is more than 10x larger than the GPT-J LLM first used in the September benchmarks.

The memory-enhanced H200 GPUs, in their MLPerf debut, used TensorRT-LLM to produce up to 31,000 tokens/second, a record on MLPerf’s Llama 2 benchmark.

The H200 GPU results include up to 14% gains from a custom thermal solution. It’s one example of innovations beyond standard air cooling that systems builders are applying to their NVIDIA MGX designs to take the performance of Hopper GPUs to new heights.

Memory Boost for NVIDIA Hopper GPUs

NVIDIA is shipping H200 GPUs today. They’ll be available soon from nearly 20 leading system builders and cloud service providers.

H200 GPUs pack 141GB of HBM3e running at 4.8TB/s. That’s 76% more memory flying 43% faster compared with H100 GPUs. These accelerators plug into the same boards and systems and use the same software as H100 GPUs.
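Those percentages follow directly from the published specs — the H100 SXM pairs 80GB of HBM3 with 3.35TB/s of bandwidth:

```python
# Comparing published memory specs: H200 (141 GB HBM3e @ 4.8 TB/s)
# vs. H100 SXM (80 GB HBM3 @ 3.35 TB/s).
h200_gb, h200_tbs = 141, 4.8
h100_gb, h100_tbs = 80, 3.35

capacity_gain = h200_gb / h100_gb - 1      # ~0.76 -> "76% more memory"
bandwidth_gain = h200_tbs / h100_tbs - 1   # ~0.43 -> "43% faster"
```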

With HBM3e memory, a single H200 GPU can run an entire Llama 2 70B model with the highest throughput, simplifying and speeding inference.

GH200 Packs Even More Memory

Even more memory — up to 624GB of fast memory, including 144GB of HBM3e — is packed in NVIDIA GH200 Superchips, which combine on one module a Hopper architecture GPU and a power-efficient NVIDIA Grace CPU. NVIDIA accelerators are the first to use HBM3e memory technology.

With nearly 5 TB/second memory bandwidth, GH200 Superchips delivered standout performance, including on memory-intensive MLPerf tests such as recommender systems.

Sweeping Every MLPerf Test

On a per-accelerator basis, Hopper GPUs swept every test of AI inference in the latest round of the MLPerf industry benchmarks.

The benchmarks cover today’s most popular AI workloads and scenarios, including generative AI, recommendation systems, natural language processing, speech and computer vision. NVIDIA was the only company to submit results on every workload in the latest round and every round since MLPerf’s data center inference benchmarks began in October 2020.

Continued performance gains translate into lower costs for inference, a large and growing part of the daily work for the millions of NVIDIA GPUs deployed worldwide.

Advancing What’s Possible

Pushing the boundaries of what’s possible, NVIDIA demonstrated three innovative techniques in a special section of the benchmarks called the open division, created for testing advanced AI methods.

NVIDIA engineers used a technique called structured sparsity — a way of reducing calculations, first introduced with NVIDIA A100 Tensor Core GPUs — to deliver up to 33% speedups on inference with Llama 2.
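The structured-sparsity pattern these GPUs support is 2:4 — in every group of four weights, two are zeroed so the hardware can skip those multiplications. The following magnitude-based pruning sketch in plain Python illustrates the pattern only; it is not NVIDIA's pruning tooling:

```python
# 2:4 structured-sparsity sketch: in each group of 4 weights,
# keep the 2 largest-magnitude values and zero the rest.
def prune_2_4(weights):
    pruned = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        # Indices of the two largest-magnitude entries in this group.
        keep = sorted(range(len(group)), key=lambda j: abs(group[j]), reverse=True)[:2]
        pruned.extend(w if j in keep else 0.0 for j, w in enumerate(group))
    return pruned

row = [0.9, -0.1, 0.05, -1.2, 0.3, 0.0, -0.7, 0.2]
print(prune_2_4(row))  # [0.9, 0.0, 0.0, -1.2, 0.3, 0.0, -0.7, 0.0]
```

Because the nonzeros land in a fixed 2-of-4 pattern, the GPU can store them compactly with small metadata indices and halve the multiply work — which is where the inference speedup comes from.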

A second open division test found inference speedups of up to 40% using pruning, a way of simplifying an AI model — in this case, an LLM — to increase inference throughput.

Finally, an optimization called DeepCache reduced the math required for inference with the Stable Diffusion XL model, accelerating performance by a whopping 74%.

All these results were run on NVIDIA H100 Tensor Core GPUs.

A Trusted Source for Users

MLPerf’s tests are transparent and objective, so users can rely on the results to make informed buying decisions.

NVIDIA’s partners participate in MLPerf because they know it’s a valuable tool for customers evaluating AI systems and services. Partners submitting results on the NVIDIA AI platform in this round included ASUS, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Google, Hewlett Packard Enterprise, Lenovo, Microsoft Azure, Oracle, QCT, Supermicro, VMware (recently acquired by Broadcom) and Wiwynn.

All the software NVIDIA used in the tests is available in the MLPerf repository. These optimizations are continuously folded into containers available on NGC, NVIDIA’s software hub for GPU applications, as well as NVIDIA AI Enterprise — a secure, supported platform that includes NIM inference microservices.

The Next Big Thing  

The use cases, model sizes and datasets for generative AI continue to expand. That’s why MLPerf continues to evolve, adding real-world tests with popular models like Llama 2 70B and Stable Diffusion XL.

Keeping pace with the explosion in LLM model sizes, NVIDIA founder and CEO Jensen Huang announced last week at GTC that the NVIDIA Blackwell architecture GPUs will deliver new levels of performance required for the multitrillion-parameter AI models.

Inference for large language models is difficult, requiring both expertise and the full-stack architecture NVIDIA demonstrated on MLPerf with Hopper architecture GPUs and TensorRT-LLM. There’s much more to come.

Learn more about MLPerf benchmarks and the technical details of this inference round.

Unlocking Peak Generations: TensorRT Accelerates AI on RTX PCs and Workstations

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and which showcases new hardware, software, tools and accelerations for RTX PC users.

As generative AI advances and becomes widespread across industries, the importance of running generative AI applications on local PCs and workstations grows. Local inference gives consumers reduced latency, eliminates their dependency on the network and enables more control over their data.

NVIDIA GeForce and NVIDIA RTX GPUs feature Tensor Cores, dedicated AI hardware accelerators that provide the horsepower to run generative AI locally.

Stable Video Diffusion is now optimized for the NVIDIA TensorRT software development kit, which unlocks the highest-performance generative AI on the more than 100 million Windows PCs and workstations powered by RTX GPUs.

Now, the TensorRT extension for the popular Stable Diffusion WebUI by Automatic1111 is adding support for ControlNets, tools that give users more control to refine generative outputs by adding other images as guidance.

TensorRT acceleration can be put to the test in the new UL Procyon AI Image Generation benchmark, which internal tests have shown accurately replicates real-world performance. TensorRT delivered a 50% speedup on a GeForce RTX 4080 SUPER GPU compared with the fastest non-TensorRT implementation.

More Efficient and Precise AI

TensorRT gives developers access to fully optimized AI experiences on RTX hardware. AI performance typically doubles compared with running the same application on other frameworks.

It also accelerates the most popular generative AI models, like Stable Diffusion and SDXL. Stable Video Diffusion, Stability AI’s image-to-video generative AI model, experiences a 40% speedup with TensorRT.

The optimized Stable Video Diffusion 1.1 Image-to-Video model can be downloaded on Hugging Face.

Plus, the TensorRT extension for Stable Diffusion WebUI boosts performance by up to 2x — significantly streamlining Stable Diffusion workflows.

With the extension’s latest update, TensorRT optimizations extend to ControlNets — a set of AI models that help guide a diffusion model’s output by adding extra conditions. With TensorRT, ControlNets are 40% faster.


Users can guide aspects of the output to match an input image, which gives them more control over the final image. They can also use multiple ControlNets together for even greater control. A ControlNet can be a depth map, edge map, normal map or keypoint detection model, among others.

Download the TensorRT extension for Stable Diffusion WebUI on GitHub today.

Other Popular Apps Accelerated by TensorRT

Blackmagic Design adopted NVIDIA TensorRT acceleration in update 18.6 of DaVinci Resolve. Its AI tools, like Magic Mask, Speed Warp and Super Scale, run more than 50% faster, and up to 2.3x faster on RTX GPUs compared with Macs.

In addition, with TensorRT integration, Topaz Labs saw up to a 60% performance increase in its Photo AI and Video AI apps — across features such as photo denoising, sharpening, photo super resolution, video slow motion, video super resolution, video stabilization and more — all running on RTX.

Combining Tensor Cores with TensorRT software brings unmatched generative AI performance to local PCs and workstations. And by running locally, several advantages are unlocked:

Performance: Users experience lower latency, since latency becomes independent of network quality when the entire model runs locally. This can be important for real-time use cases such as gaming or video conferencing. NVIDIA RTX offers the fastest AI accelerators, scaling to more than 1,300 trillion AI operations per second, or TOPS.
Cost: Users don’t have to pay for cloud services, cloud-hosted application programming interfaces or infrastructure costs for large language model inference.
Always on: Users can access LLM capabilities anywhere they go, without relying on high-bandwidth network connectivity.
Data privacy: Private and proprietary data can always stay on the user’s device.

Optimized for LLMs

What TensorRT brings to deep learning, NVIDIA TensorRT-LLM brings to the latest LLMs.

TensorRT-LLM, an open-source library that accelerates and optimizes LLM inference, includes out-of-the-box support for popular community models, including Phi-2, Llama 2, Gemma, Mistral and Code Llama. Anyone — from developers and creators to enterprise employees and casual users — can experiment with TensorRT-LLM-optimized models in the NVIDIA AI Foundation models. Plus, with the NVIDIA ChatRTX tech demo, users can see the performance of various models running locally on a Windows PC. ChatRTX is built on TensorRT-LLM for optimized performance on RTX GPUs.

NVIDIA is collaborating with the open-source community to develop native TensorRT-LLM connectors to popular application frameworks, including LlamaIndex and LangChain.

These innovations make it easy for developers to use TensorRT-LLM with their applications and experience the best LLM performance with RTX.

Get weekly updates directly in your inbox by subscribing to the AI Decoded newsletter.

Viome’s Guru Banavar Discusses AI for Personalized Health

In the latest episode of NVIDIA’s AI Podcast, Viome Chief Technology Officer Guru Banavar spoke with host Noah Kravitz about how AI and RNA sequencing are revolutionizing personalized healthcare. The startup aims to tackle the root causes of chronic diseases by delving deep into microbiomes and gene expression.

With a comprehensive testing kit, Viome translates biological data into practical dietary recommendations. Viome is forging ahead with professional healthcare solutions, such as early detection tests for diseases, and integrating state-of-the-art technology with traditional medical practices for a holistic approach to wellness.

The AI Podcast · Personalized Health: Viome’s Guru Banavar Discusses Startup’s AI-Driven Approach – Ep. 352

Time Stamps:

2:00: Introduction to Viome and the science of nutrigenomics
4:25: The significance of RNA over DNA in health analysis
7:40: The crucial role of the microbiome in understanding chronic diseases
12:50: From sample collection to personalized nutrition recommendations
17:35: Viome’s expansion into professional healthcare solutions and early disease detection

 

Boom in AI-Enabled Medical Devices Transforms Healthcare

The future of healthcare is software-defined and AI-enabled. Around 700 FDA-cleared, AI-enabled medical devices are now on the market — more than 10x the number available in 2020.

Many of the innovators behind this boom announced their latest AI-powered solutions at NVIDIA GTC, a global conference that last week attracted more than 16,000 business leaders, developers and researchers in Silicon Valley and many more online. 

Designed to make healthcare more efficient and help improve patient outcomes, these new technologies include foundation models to accelerate ultrasound analysis, augmented and virtual reality solutions for cardiac imaging, and generative AI software to support surgeons. 

Shifting From Hardware to Software-Defined Medical Devices 

Medical devices have long been hardware-centric, relying on intricate designs and precise engineering. They’re now shifting to be software-defined, meaning they can be enhanced over time through software updates — the same way that smartphones can be upgraded with new apps and features for years before a user upgrades to a new device.  

This new approach, supported by NVIDIA’s domain-specific platforms for real-time accelerated computing, is taking center stage because of its potential to transform patient care, increase efficiencies, enhance the clinician experience and drive better outcomes. 

Leading medtech companies such as GE Healthcare are using NVIDIA technology to develop, fine-tune and deploy AI for software-defined medical imaging applications.   

GE Healthcare announced at GTC that it used NVIDIA tools including the TensorRT software development kit to develop and optimize SonoSAMTrack, a recent research foundation model that delineates and tracks organs, structures or lesions across medical images with just a few clicks. The research model has the potential to simplify and speed up ultrasound analysis for healthcare professionals. 

Powering the Next Generation of Digital Surgery 

With the NVIDIA IGX edge computing platform and NVIDIA Holoscan medical-grade edge AI platform, medical device companies are accelerating the development and deployment of AI-powered innovation in the operating room.  

Johnson & Johnson MedTech is working with NVIDIA to test new AI capabilities for the company’s connected digital ecosystem for surgery. It aims to enable open innovation and accelerate the delivery of real-time insights at scale to support medical professionals before, during and after procedures.  

Paris-based robotic surgery company Moon Surgical is using Holoscan and IGX to power its Maestro System, which is used in laparoscopy, a technique where surgeons operate through small incisions with an internal camera and instruments.  

Maestro’s ScoPilot enables surgeons to control a laparoscope without taking their hands off other surgical tools during an operation.  To date, it’s been used to treat over 200 patients successfully.  

Moon Surgical and NVIDIA are also collaborating to bring generative AI features to the operating room using Maestro and Holoscan. 

NVIDIA Platforms Power Thriving Medtech Ecosystem 

A growing number of medtech companies and solution providers are making it easier for customers to adopt NVIDIA’s edge AI platforms to enhance and accelerate healthcare.

Arrow Electronics is delivering IGX as a subscription-like platform-as-a-service for industrial and medical customers. Customers who have adopted Arrow’s business model to accelerate application deployment include Kaliber AI, a company developing AI tools to assist minimally invasive surgery. At GTC, Kaliber showcased AI-generated insights for surgeons and a large language model to respond to patient questions. 

Global visualization leader Barco is adopting Holoscan and IGX to build a turnkey surgical AI platform for customers seeking an off-the-shelf offering that allows them to focus their engineering resources on application development. The company is working with SoftAcuity on two Holoscan-based products that will include generative AI voice control and AI-powered data analytics.  

And Magic Leap has integrated Holoscan in its extended reality software stack, enhancing the capabilities of customers like Medical iSight — a software developer building real-time, intraoperative support for minimally invasive treatments of stroke and neurovascular conditions. 

Learn more about NVIDIA-accelerated medtech.

Get started on NVIDIA NGC or visit ai.nvidia.com to experiment with more than two dozen healthcare microservices.

Subscribe to NVIDIA healthcare news.

Model Innovators: How Digital Twins Are Making Industries More Efficient

A manufacturing plant near Hsinchu, Taiwan’s Silicon Valley, is among facilities worldwide boosting energy efficiency with AI-enabled digital twins.

A virtual model can help streamline operations, maximizing throughput for its physical counterpart, say engineers at Wistron, a global designer and manufacturer of computers and electronics systems.

In the first of several use cases, the company built a digital copy of a room where NVIDIA DGX systems undergo thermal stress tests (pictured above). Early results were impressive.

Making Smart Simulations

Using NVIDIA Modulus, a framework for building AI models that understand the laws of physics, Wistron created digital twins that let them accurately predict the airflow and temperature in test facilities that must remain between 27 and 32 degrees C.

A simulation that would’ve taken nearly 15 hours with traditional methods on a CPU took just 3.3 seconds on an NVIDIA GPU running inference with an AI model developed using Modulus, a whopping 15,000x speedup.
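That figure checks out from the numbers given — roughly 15 hours on a CPU versus 3.3 seconds on a GPU:

```python
# Sanity-checking the reported speedup: ~15 hours on CPU vs. 3.3 s on GPU.
cpu_seconds = 15 * 3600   # 54,000 s
gpu_seconds = 3.3
speedup = cpu_seconds / gpu_seconds
print(f"{speedup:,.0f}x")  # prints "16,364x", on the order of the ~15,000x reported
```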

The results were fed into tools and applications built by Wistron developers with NVIDIA Omniverse, a platform for creating 3D workflows and applications based on OpenUSD.

A bird’s-eye view of the model of Wistron’s computer test room.

With their Omniverse-powered software, Wistron created realistic and immersive simulations that operators interact with via VR headsets. And thanks to the AI models they developed using Modulus, the airflows in the simulation obey the laws of physics.

“Physics-informed models let us control the test process and the room’s temperature remotely in near real time, saving time and energy,” said John Lu, a manufacturing operations director at Wistron.

Specifically, Wistron combined separate models for predicting air temperature and airflow to eliminate risks of overheating in the test room. It also created a recommendation system to identify the best locations to test computer baseboards.

The digital twin, linked to thousands of networked sensors, enabled Wistron to increase the facility’s overall energy efficiency by up to 10%. That amounts to using up to 121,600 kWh less electricity a year and cutting carbon emissions by 60,192 kilograms.

An Expanding Effort

Currently, the group is expanding its AI model to track more than a hundred variables in a space that holds 50 computer racks. The team is also simulating all the mechanical details of the servers and testers.

“The final model will help us optimize test scheduling as well as the energy efficiency of the facilities’ air conditioning system,” said Derek Lai, a Wistron technical supervisor with expertise in physics-informed neural networks.

Looking ahead, “The tools and applications we’re building with Omniverse help us improve the layout of our DGX factories to provide the best throughput, further improving efficiency,” said Lu.

Efficiently Generating Energy

Half a world away, Siemens Energy is demonstrating the power of digital industrialization using Modulus and Omniverse.

The Munich-based company, whose technology generates one-sixth of the world’s electricity, achieved a 10,000x speedup simulating a heat-recovery steam generator using a physics-informed AI model (see video below).

By using a digital twin to detect corrosion early, operators can reduce these massive systems’ downtime by 70%, potentially saving the industry $1.7 billion annually. The physics-informed model replaces a standard simulation that took half a month to run.

“The reduced computational time enables us to develop energy-efficient digital twins for a sustainable, reliable and affordable energy ecosystem,” said Georg Rollmann, head of advanced analytics and AI at Siemens Energy.

Digital Twins Drive Science and Industry

Automotive companies are applying the technology to the design of new cars and manufacturing plants. Scientists are using it in fields as diverse as astrophysics, genomics and weather forecasting. It’s even being used to create a digital twin of Earth to understand and mitigate the impacts of climate change.

Every year, physics simulations, typically run on supercomputer-class systems, consume an estimated 200 billion CPU core hours and 4 terawatt hours of energy. Physics-informed AI is accelerating these complex workflows 200x on average, saving time, cost and energy.

For more insights, listen to a talk from GTC describing Wistron’s work and a panel about industries using generative AI.

Learn more about the impact accelerated computing is having on sustainability.

Into the Omniverse: Groundbreaking OpenUSD Advancements Put NVIDIA GTC Spotlight on Developers

Editor’s note: This post is part of Into the Omniverse, a series focused on how artists, developers and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

The Universal Scene Description framework, aka OpenUSD, has emerged as a game-changer for building virtual worlds and accelerating creative workflows. It can ease the handling of complex datasets, facilitate collaboration and enable seamless interoperability between 3D applications.

The latest news and demos from NVIDIA GTC, a global AI conference that ran last week, put on display the power developers gain from NVIDIA Omniverse — a platform of application programming interfaces (APIs) and software development kits (SDKs) that enable them to build 3D pipelines, tools, applications and services.

Newly announced NVIDIA Omniverse Cloud APIs, coming first to Microsoft Azure, allow developers to send their OpenUSD industrial scenes from content-creation applications to the NVIDIA Graphics Delivery Network.

Such a workflow was showcased in a demo featuring an interactive car configurator application, developed by computer-generated-imagery studio Katana using Omniverse, streamed in full fidelity to an Apple Vision Pro’s high-resolution display. A designer wearing the Vision Pro toggled through paint and trim options, and even entered the vehicle.

In a separate demo, Dassault Systèmes showcased, using its 3DEXCITE portfolio, a powerful web-based application for 3D data preparation supercharged with NVIDIA AI and Omniverse Cloud APIs to deliver new generative storytelling capabilities.

OpenUSD also played a part in the announcement of NVIDIA’s latest AI supercomputer, a powerful cluster based on the NVIDIA GB200 NVL72 liquid-cooled system, which was showcased as a digital twin in Omniverse.

Engineers unified and visualized multiple computer-aided design datasets with full physical accuracy and photorealism using OpenUSD through the Cadence Reality digital twin platform, powered by Omniverse APIs. The technologies together provided a powerful computing platform for developing OpenUSD-based 3D tools, workflows and applications.

Siemens announced it has integrated OpenUSD into its Xcelerator platform applications via Omniverse Cloud APIs, enabling its customers to unify their 3D data and services in digital twins with physically based rendering.

A demo showcased how ship manufacturer HD Hyundai used Siemens’ Teamcenter X, which is part of Xcelerator, to design digital twins of complex engineering projects, delivering accelerated collaboration, minimized workflow waste, time and cost savings, and reduced manufacturing defects.

OpenUSD Ecosystem Updates on Replay

The latest OpenUSD ecosystem updates shared at GTC include:

Ansys is adopting OpenUSD and Omniverse Cloud APIs to enable data interoperability and NVIDIA RTX visualization in technologies such as Ansys AVxcelerate for autonomous vehicles, Ansys Perceive EM for 6G simulation, and NVIDIA-accelerated solvers such as Ansys Fluent.
Dassault Systèmes is using OpenUSD, Omniverse Cloud APIs and Shutterstock 3D AI Services for generative storytelling in 3DEXCITE applications.
Continental is developing an OpenUSD-based digital twin platform to optimize factory operations and speed time to market.
Hexagon is integrating reality-capture sensors and digital-reality platforms with OpenUSD and Omniverse Cloud APIs for hyperrealistic simulation and visualization.
Media.Monks is adopting Omniverse for a generative AI- and OpenUSD-enabled content-creation pipeline for scalable hyper-personalization.
Microsoft is integrating Omniverse Cloud APIs with Microsoft Power BI, so factory operators can see real-time factory data overlaid on a 3D digital twin to speed up production.
Rockwell Automation is using OpenUSD and Omniverse Cloud APIs for RTX-enabled visualization in industrial automation and digital transformation.
Trimble is enabling interactive NVIDIA Omniverse RTX viewers with Trimble model data using OpenUSD and Omniverse Cloud APIs.
Wistron is building OpenUSD-based digital twins of NVIDIA DGX and HGX factories using custom software developed with Omniverse SDKs and APIs.
WPP is expanding its Omniverse Cloud-based OpenUSD and generative AI content-generation engine to the retail and consumer packaged goods sector.

Get Plugged In to the World of OpenUSD

Several GTC sessions expanded on the latest OpenUSD advancements. Register free to watch them on demand:

The Big Bang of OpenUSD: Hear from technical luminaries at Pixar, Adobe, Apple, Autodesk and NVIDIA on the potential of OpenUSD.
An Introduction to OpenUSD: Learn why OpenUSD is more than just a file format and how it can revolutionize 3D workflows.
Mastering USD and Adobe Substance 3D: Learn how to master real-world material capture, seamlessly integrate workflows with OpenUSD and enhance realism with Adobe Substance 3D.
Enabling 3D Geospatial Workflows for Industrial Digital Twins: Learn how Cesium is using OpenUSD to enable high-fidelity streaming and rendering for global-scale digital twins.
Digitalizing the World’s Largest Industries With OpenUSD and Generative AI: Explore how global industries are becoming software-defined.

Get started with NVIDIA Omniverse by downloading the free standard license, access OpenUSD resources, and learn how Omniverse Enterprise can connect your team. Stay up to date on Instagram, Medium and X. For more, join the Omniverse community on the forums, Discord server, Twitch and YouTube channels.

Featured image courtesy of Siemens, HD Hyundai.

NVIDIA Blackwell and Automotive Industry Innovators Dazzle at NVIDIA GTC

Generative AI, in the data center and in the car, is making vehicle experiences safer and more enjoyable.

The latest advancements in automotive technology were on display last week at NVIDIA GTC, a global AI conference that drew tens of thousands of business leaders, developers and researchers from around the world.

The event kicked off with NVIDIA founder and CEO Jensen Huang’s keynote, which included the announcement of the NVIDIA Blackwell platform — purpose-built to power a new era of AI computing.

The NVIDIA Blackwell GPU architecture will be integrated into the NVIDIA DRIVE Thor centralized car computer to enable generative AI applications and immersive in-vehicle experiences. Large language models will be able to run in the car, enabling an intelligent copilot that understands and speaks in natural language.

BYD, the world’s largest electric-vehicle maker, announced it will adopt DRIVE Thor as the AI brain of its future fleets. In addition, the company will use NVIDIA’s AI infrastructure for cloud-based AI development and training, and the NVIDIA Isaac and NVIDIA Omniverse platforms to develop tools and applications for virtual factory planning and retail configurators. Hyper, Nuro, Plus, Waabi, WeRide and XPENG are also adopting DRIVE Thor.

Learn more about the automotive ecosystem’s announcements at GTC.

Some of the latest NVIDIA-powered vehicles displayed on the exhibition floor included:

Aurora self-driving truck, already on the highways of Texas
Lucid Air long-range electric sedan
Mercedes-Benz Concept CLA Class, showcasing what’s to come
Nuro R3, a fully autonomous robotic delivery model
Polestar 3, the SUV for the electric age
Volvo Cars EX90, the company’s new fully electric flagship SUV
WeRide Robobus, a new form of urban mobility

Mercedes-Benz Concept CLA Class.

The NVIDIA auto booth highlighted the wide adoption of the NVIDIA DRIVE platform, with displays featuring electronic control units from a variety of partners, including Bosch, Lenovo and ZEEKR.

A wide range of NVIDIA automotive partners, including Ansys, Foretellix, Lenovo, MediaTek, NODAR, OMNIVISION, Plus, Seyond, SoundHound, Voxel51 and Waabi, all made next-generation product announcements at GTC.

In addition, the automotive pavilion buzzed with interest in the latest lidar advancements from Luminar and Robosense, as well as Helm.ai’s software offerings for the level 2 to level 4 autonomous driving stack.

And other partners, such as Ford, Geely, General Motors, Jaguar Land Rover and Zoox, participated in dozens of sessions and panels covering topics such as building data center applications and developing safe autonomous vehicles. Watch the sessions on demand.

Learn more about the latest advancements in generative AI and automotive technology by watching Huang’s GTC keynote in replay.

AI’s New Frontier: From Daydreams to Digital Deeds

Imagine a world where you can whisper your digital wishes into your device, and poof, it happens.

That world may be coming sooner than you think. But if you’re worried about AI doing your thinking for you, you might be waiting for a while.

In a fireside chat Wednesday at NVIDIA GTC, the global AI conference, Kanjun Qiu, CEO of Imbue, and Bryan Catanzaro, VP of applied deep learning research at NVIDIA, challenged many of the clichés that have long dominated conversations about AI.

Launched in October 2022, Imbue made headlines last year with its Series B funding round, raising over $200 million at a $1 billion valuation.

Bridging the Gap Between ‘Idea and Execution’

The discussion not only highlighted Imbue’s approach to building practical AI agents that automate menial, unrewarding work, but also painted a vivid picture of what the next chapter in AI innovation might hold.

“Our lives are full of so much friction … every single person’s vision can come to life,” Qiu said. “The barrier between idea and execution can be much smaller.”

Catanzaro’s reflections on the practical difficulties of using AI for simple tasks, such as his own challenges trying to get his digital assistant to help him find his next meeting, underscored the current limitations in human-AI interaction.

It turns out that figuring out where and when to go to a meeting, while easy for a human assistant, isn’t easy to automate.

“We tend to underestimate the things that we do naturally and overestimate the things that require reasoning,” Catanzaro observed. “One of the things humans deal with well is ambiguity.”

This set the stage for a broader discussion of the need for AI to move beyond mere code generation and become a dynamic, intuitive interface between humans and computers.

Qiu said the idea that AI can be a magical assistant, one that knows everything about you, “isn’t necessarily the right paradigm.”

That’s because delegation is hard.

“When I’m delegating something, even to a human, I have to think a lot about ‘okay, how can I package this up so that the person will do the right thing?’”

Instead, the better model might be one in which you can tell your computer to do anything you want, so you’re “telling your computer to do stuff and the agent is a middle layer,” she said.

Such agents will need to be able to interact with people — something often described as “reasoning,” the two observed — and communicate with computers — or “code.”

A Vision for Empowerment Through Technology

Qiu and Catanzaro — who often completed each other’s sentences during the 45-minute conversation — compared AI’s potential to democratize software creation to the Industrial Revolution’s impact on manufacturing.

The parts needed for a steam engine, for example, once took years to create. Now they can be ordered off the shelf for a small sum.

Both speakers emphasized the importance of creating intuitive interfaces that allow individuals from nontechnical backgrounds to engage with computers more effectively, fostering a more inclusive digital landscape.

That means going beyond coding, which is done in text-heavy environments such as an integrated development environment (IDE), or even using text-based chats.

“The interface to agents, a lot of them today, is like a chat interface. It’s not a very good interface, in a lot of ways, very restrictive. And so there are much better ways of working with these systems,” Qiu said.

The Future of Personal Computing

Qiu and Catanzaro discussed the role that virtual worlds will play in this, and how they could serve as interfaces for human-technology interaction.

“I think it’s pretty clear that AI is going to help build virtual worlds,” said Catanzaro. “I think the maybe more controversial part is virtual worlds are going to be necessary for humans to interact with AI.”

People have an almost primal fear of being displaced, Catanzaro said, but what’s much more likely is that our capabilities will be amplified as the technology fades into the background.

Catanzaro compared it to the adoption of electricity. A century ago, people talked a lot about electricity. Now that it’s ubiquitous, it’s no longer the focus of broader conversations, even as it makes our day-to-day lives better.

“I think of it as really being able to [help us] control information environments … once we have control over information environments, we’ll feel a lot more empowered,” Qiu said. “Every single person’s vision can come to life.”

Here Be Dragons: ‘Dragon’s Dogma 2’ Comes to GeForce NOW

Arise for a new adventure with Dragon’s Dogma 2, which leads the two new titles joining the GeForce NOW library this week.

Set Forth, Arisen

Fulfill a forgotten destiny in “Dragon’s Dogma 2” from Capcom.

Time to go on a grand adventure, Arisen!

Dragon’s Dogma 2, the long-awaited sequel to Capcom’s legendary action role-playing game, streams this week on GeForce NOW.

The game challenges players to choose their own experience, including their Arisen’s appearance, vocation, party, approaches to different situations and more. Wield swords, bows and magick across an immersive fantasy world full of life and battle. But players won’t be alone. Recruit Pawns — mysterious otherworldly beings — to aid in battle and work with other players’ Pawns to fight the diverse monsters inhabiting the ever-changing lands.

Upgrade to a GeForce NOW Ultimate membership to stream Dragon’s Dogma 2 from NVIDIA GeForce RTX 4080 servers in the cloud for the highest performance, even on low-powered devices. Ultimate members also get exclusive access to servers, letting them jump right into gaming without waiting for any downloads.

New Games, New Challenges

No holding back.

Battlefield 2042: Season 7 Turning Point is here. Do whatever it takes to battle for Earth’s most valuable resource — water — in a Chilean desert. Deploy on a new map, Haven, focused on suburban combat, and revisit a fan-favorite front: Stadium. Gear up with new hardware like the SCZ-3 SMG or the Predator SRAW, and jump into a battle for ultimate power.

Then, look forward to the following list of games this week:

Alone in the Dark (New release on Steam, March 20)
Dragon’s Dogma 2 (New release on Steam, March 21)

What are you planning to play this weekend? Let us know on X or in the comments below.

we got that 𝙙𝙤𝙜 in us

— NVIDIA GeForce NOW (@NVIDIAGFN) March 20, 2024