UN Satellite Centre Works With NVIDIA to Boost Sustainable Development Goals

To foster climate action for a healthy global environment, NVIDIA is working with the United Nations Satellite Centre (UNOSAT) to apply the power of deep learning and AI.

The effort supports the UN’s 2030 Agenda for Sustainable Development, which has at its core 17 interrelated Sustainable Development Goals. These SDGs — which include “climate action” and “sustainable cities and communities” — serve as calls to action for all UN member states to bolster global well-being.

The collaboration between UNOSAT, part of the United Nations Institute for Training and Research, and NVIDIA is initially focused on boosting climate-related disaster management by using AI for Earth Observation. AI4EO, as it’s known, is a term that encompasses initiatives using AI to help monitor and assess the planet’s changes.

To fast track research and development for its AI4EO efforts, UNOSAT will integrate its satellite imagery technology infrastructure with NVIDIA’s accelerated computing platform. The AI-powered satellite imagery system will collect and analyze geospatial information to provide near-real-time insights about floods, wildfires and other climate-related disasters.

In addition, UNOSAT has launched an educational module that builds upon an NVIDIA Deep Learning Institute (DLI) course on applying deep learning methods to generate accurate flood detection models.

“Working with NVIDIA will enable us to close the loop from AI research to implementation of climate solutions in the shortest time possible, ensuring that vulnerable populations can benefit from the technology,” said Einar Bjørgo, director of UNOSAT.

AI-Powered Satellite Imagery Analysis

For tasks like evaluating the impact of a tropical cyclone in the Philippines or a volcanic eruption in Tonga, UNOSAT’s emergency mapping service uses computer vision and satellite imagery analysis to gain accurate information about complex disasters.

Near-real-time analysis is key to managing climate-disaster events. Humanitarian teams can use the data-driven insights provided by AI to take rapid, effective action in combating disasters. The data is also used to inform sustainable development policies, develop users’ capacities and strengthen climate resilience overall.

UNOSAT will supercharge its satellite imagery technology infrastructure with NVIDIA DGX systems, which enable AI development at scale — as well as the NVIDIA EGX platform, which delivers the power of accelerated computing from the data center to the edge.

NVIDIA technology speeds up AI-based flood detection by 7x, covering larger areas with greater accuracy, according to UNOSAT.

NVIDIA DLI Course on Disaster Risk Monitoring

In addition to powerful technology, a skilled workforce is essential to using AI and data science to analyze climate events and prevent them from becoming humanitarian disasters.

“NVIDIA and UNOSAT have a unique opportunity to combat the impact of climate change and advance the UN’s SDGs, with a launching point of training data scientists to develop and deploy GPU-accelerated models that improve flood prediction,” said Keith Strier, vice president of global AI initiatives at NVIDIA.

UNOSAT has developed a module for the Deep Learning Institute’s free online course that covers how to build a deep learning model to automate the detection of flood events.

Called Disaster Risk Monitoring Using Satellite Imagery, it’s the first NVIDIA DLI course focused on climate action for the global public sector community — with many additional climate-action-related courses being planned.

UNOSAT’s module — based on a real UN case study — highlights an example of a flood in Nepal.

In collaboration with NVIDIA, UNOSAT is offering the module for free with the goal of upskilling data scientists worldwide to harness accelerated computing to predict and respond to climate-related disasters.
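For readers curious what such a model looks like in code, below is a minimal, hypothetical PyTorch sketch of a flood-segmentation network trained on two-band satellite tiles. The architecture, band count, image size and loss are illustrative assumptions, not the actual model taught in the DLI module.

```python
# Minimal, illustrative flood-segmentation sketch (assumptions: 2-band SAR-like
# input tiles, binary flood masks, tiny encoder-decoder; not the DLI course model).
import torch
import torch.nn as nn

class TinyFloodNet(nn.Module):
    def __init__(self, in_bands: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one flood/no-flood logit per pixel
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyFloodNet().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Dummy batch standing in for satellite tiles and their water masks.
tiles = torch.randn(4, 2, 256, 256, device=device)
masks = torch.randint(0, 2, (4, 1, 256, 256), device=device).float()

logits = model(tiles)
loss = loss_fn(logits, masks)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```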

“We aim to democratize access to accelerated computing to help nations train more accurate deep learning models that better predict and respond to a full spectrum of humanitarian and natural disasters,” Strier said.

Get started with the course, which is now available.

Learn more about how NVIDIA technology is used to benefit the planet and its people.


Family Style: Li Auto L9 Brings Top-Line Luxury and Intelligence to Full-Size SUV With NVIDIA DRIVE Orin

Finally, there’s a family car any kid would want to be seen in.

Beijing-based startup Li Auto this week rolled out its second electric vehicle, the L9. It’s a full-size SUV decked out with the latest intelligent driving technology.

With AI features and an extended range of more than 800 miles, the L9 promises to raise the bar for luxury family vehicles.

Li Auto is deploying its newest automated driving features with the expansion of its vehicle lineup, using a software-defined compute platform built on two NVIDIA DRIVE Orin systems-on-a-chip (SoCs).

With more than 500 trillion operations per second (TOPS), the L9’s compute platform can run various deep neural networks simultaneously and in real time, all while ensuring the redundancy and diversity necessary for safety.

First-Class Safety and Security

As a top-line luxury model, the L9 sports only the best when it comes to AI-assisted driving technology.

All Li Auto vehicles come standard with the electric automaker’s advanced driver assistance system, Li AD Max. To achieve surround perception, the system uses one forward-facing lidar, 11 cameras, one radar and 12 ultrasonic sensors, as well as DRIVE Orin SoCs.

In addition to handling the large number of applications and deep neural networks necessary for autonomous driving, DRIVE Orin is architected to achieve systematic safety standards such as ISO 26262 ASIL-D. Its dual processors provide fallback redundancy for each other, further ensuring safe operation.

The L9’s high-performance sensors also enable round-the-clock security features, monitoring both the car’s interior and exterior.

Innovative Infotainment

Inside the vehicle, five 3D-capable screens transform the in-cabin experience.

In the cockpit, a combined head-up display and confidence view enhance safety for the person driving. The head-up display projects key driving information onto the front windshield, and the interactive visualization feature of the vehicle’s perception system is located above the steering wheel, keeping the driver’s attention on the road.

The L9’s screens for central control, passenger entertainment and rear cabin entertainment are 15.7-inch, 3K-resolution, automotive-grade OLED displays that deliver first-class visual experiences for every occupant.

Passengers can also interact with the intelligent in-cabin system via interior sensors and natural language processing.

Designed to optimize all driver and passenger experiences, the L9 represents the top of the line for luxury family vehicles.


Making an Impact: GFN Thursday Transforms Macs Into GeForce Gaming PCs

Thanks to the GeForce cloud, even Mac users can be PC gamers. This GFN Thursday, fire up your MacBook and get your game on.

This week brings eight more games to the GeForce NOW library. Plus, members can play Genshin Impact and claim a reward to kick off their journeys streaming on GeForce NOW.

Mac User by Day, Gamer by Night

Love using a Mac, but can’t play the PC-only game that everyone’s talking about — like Genshin Impact or this week’s Epic Games Store free game, Car Mechanic Simulator 2018? GeForce NOW transforms nearly any Mac into a high-end gaming rig, rendering games at full quality and streaming them to MacBook Pros, MacBook Airs, iMacs and Mac minis.

On GeForce NOW, you play the real PC versions of games without having to worry about whether something has been ported to Mac. Since the native PC versions of games stream straight from the cloud, gamers can upgrade to the newest Apple hardware with confidence.

GeForce NOW RTX 3080 members can play on M1 Mac laptops at up to 1600p, or up to 4K resolution on supported external displays. Stream with even longer session lengths — up to eight hours. Members on RTX 3080 and Priority plans can even play with RTX ON for supported games, experiencing modern classics like Cyberpunk 2077 and Control with real-time ray tracing. No PC required.

Game saves are synced across each digital store for supported games, so members can play on Macs, as well as any other supported device, without losing progress.

Join today to see what it’s like to have the best of both PC and Mac worlds.

Get Started With Genshin Impact

This week brings the release of Genshin Impact, as well as rewards for Travelers playing on GeForce NOW.

Embark on a journey as a Traveler from another world and search for a missing sibling in the fantastic continent of Teyvat. Explore immersive landscapes, dive deep into rich quests alongside iconic characters and complete daily challenges, streaming across supported PCs, Macs and Chromebooks.

RTX 3080 members can even play with ultra-low latency, streaming at 1440p and 120 frames per second or in 4K resolution at 60 FPS on the PC and Mac apps.

Start the adventure off right with rewards in “Genshin Impact.”

Members who’ve opted in to rewards will receive an email for a starter kit that can be claimed through the NVIDIA Rewards redemption portal. The kit will become available in game once players reach Adventure Rank 10.

The reward includes 10,000 Mora to purchase various items, five Fine Enhancement Ores to enhance weapons, three Squirrel Fish and three Northern Apple Stews for fuel, and 10 Adventurer’s Experience points to level up characters.

Getting membership rewards for streaming games on the cloud is easy. Log in to your NVIDIA account and select “GEFORCE NOW” from the header, then scroll down to “REWARDS” and click the “UPDATE REWARDS SETTINGS” button. Check the box in the dialogue window that shows up to start receiving special offers and in-game goodies.

Jump Into the Newest Games

Get a little wild this week with new endangered animals to care for and more in the Planet Zoo: Conservation Pack.

There’s something for everyone on GeForce NOW. This week brings new in-game content like the Planet Zoo: Conservation Pack, the newest DLC for Frontier Developments’ ultimate zoo sim.

Members can also stream the following eight new titles this week:

Car Mechanic Simulator 2018 (Free on Epic Games Store, June 23-30)
Airborne Kingdom (Steam)
Fishing: North Atlantic (Steam)
Genshin Impact (Native Launcher)
Immortal Life (Steam)
King Arthur: Knight’s Tale (Steam)
SimAirport (Steam)
Skeleton Crew (Steam)

Finally, we’ve got a little challenge for you this week. Let us know your answer on Twitter or in the comments below.

“Share your favorite @GenshinImpact character using only emojis. We’ll go first,” tweeted NVIDIA GeForce NOW (@NVIDIAGFN) on June 22, 2022.


Meet the Omnivore: Director of Photography Revs Up NVIDIA Omniverse to Create Sleek Car Demo

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

A camera begins in the sky, flies through some trees and smoothly exits the forest, all while precisely tracking a car driving down a dirt path. This would be all but impossible in the real world, according to film and photography director Brett Danton.

But Danton made what he calls this “impossible camera move” possible for an automotive commercial — at home, with cinematic quality and physical accuracy.

He pulled off the feat using NVIDIA Omniverse, a 3D design collaboration and world simulation platform that enhanced his typical creative workflow and connected various apps he uses, including Autodesk Maya, Epic Games Unreal Engine and Omniverse Create.

With 30+ years of experience in the digital imagery industry, U.K.-based Danton creates advertisements for international clients, showcasing products ranging from cosmetics to cars.

His latest projects, like the Volvo car commercial above, demonstrate how a physical location can be recreated for a virtual shoot, delivering photorealistic rendered sequences that match cinematic real-world footage.

“This breaks from traditional imagery and shifts the gears of what’s possible in the digital arts, allowing multiple deliverables inside the one asset,” Danton said.

The physically accurate simulation capabilities of Omniverse took Danton’s project a step further, animating a photorealistic car that reacts to the dirt road’s uneven surface as it would in real life.

And by working with Universal Scene Description (USD)-based assets from connected digital content creation tools like Autodesk Maya and Unreal Engine in Omniverse, Danton collaborated with other art departments from his home, just outside of London.

“Omniverse gives me an entire studio on my desktop,” Danton said. “It’s impossible to tell the difference between the real location and what’s been created in Omniverse, and I know that because I went and stood in the real location to create the virtual set.”

Real-Time Collaboration for Multi-App Workflows

To create the forest featured in the car commercial, Danton collaborated with award-winning design studio Ars Thanea. The team shot countless 100-megapixel images to use as references, resulting in a point cloud — or set of data points representing 3D shapes in space — that totaled 250 gigabytes.

The team then used Omniverse as the central hub for all of the data exchange, accelerated by NVIDIA RTX GPUs. Autodesk Maya served as the entry point for camera animation and initial lighting before the project’s data was brought into Omniverse with an Omniverse Connector.

And with the Omniverse Create app, the artists placed trees by hand, created tree patches and tweaked them to fit the forest floor. Omniverse-based real-time collaboration was key for enabling high-profile visual effects artists to work together remotely and on site, Danton said.

Omniverse Create uses Pixar’s USD format to accelerate advanced scene composition and assemble, light, simulate and render 3D scenes in real time.
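As a rough illustration of what USD scene description looks like in practice, here is a minimal sketch using Pixar’s open-source pxr Python API. The prim paths, asset file names and values are invented for the example rather than taken from Danton’s project.

```python
# Minimal USD scene sketch using Pixar's pxr Python bindings
# (prim paths, file names and values are illustrative; `pip install usd-core`).
from pxr import Usd, UsdGeom, UsdLux, Gf

stage = Usd.Stage.CreateNew("forest_shot.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)

# A transform to hold the set, and a simple stand-in mesh for the car body.
world = UsdGeom.Xform.Define(stage, "/World")
car = UsdGeom.Cube.Define(stage, "/World/CarProxy")
car.AddTranslateOp().Set(Gf.Vec3d(0.0, 0.5, 0.0))

# A distant light standing in for the sun.
sun = UsdLux.DistantLight.Define(stage, "/World/Sun")
sun.CreateIntensityAttr(3000.0)

# Reference in a tree asset authored in another tool (hypothetical file).
tree = stage.DefinePrim("/World/Tree_001")
tree.GetReferences().AddReference("assets/pine_tree.usd")

stage.GetRootLayer().Save()
```

Because every connected app reads and writes the same USD layers, an asset authored in one tool shows up for collaborators working in another.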

Photorealistic Lighting With Path Tracing

When directing projects in physical production sites and studios, Danton said he was limited in what he could achieve with lighting — depending on resources, time of day and many other factors. Omniverse removes such creative limitations.

“I can now pre-visualize any of the shots I want to take, and on top of that, I can light them in Omniverse in a photorealistic way,” Danton said.

When he moves a light in Omniverse, the scene reacts exactly the way it would in the real world.

This ability, enabled by Omniverse’s RTX-powered real-time ray tracing and path tracing, is Danton’s favorite aspect of the platform. It lets him create photorealistic, cinematic sequences with “true feel of light,” which wasn’t possible before, he said.

In the Volvo car clip above, for example, the Omniverse lighting reacts on the car as it would in the forest, with physically accurate reflections and light bouncing off the windows.

“I’ve tried other software before, and Omniverse is far superior to anything else I have seen because of its real-time rendering and collaborative workflow capabilities,” Danton said.

Join in on the Creation

Creators across the world can experience NVIDIA Omniverse for free, and enterprise teams can use the platform for their projects.

Plus, join the #MadeInMachinima contest, running through June 27, for a chance to win the latest NVIDIA Studio laptop.

Learn more about Omniverse by watching GTC sessions on demand — featuring visionaries from the Omniverse team, Adobe, Autodesk, Epic Games, Pixar, Unity and Walt Disney Studios.

Follow Omniverse on Instagram, Twitter, YouTube and Medium for additional resources and inspiration. Check out the Omniverse forums and join our Discord Server to chat with the community.


Artem Cherkasov and Olexandr Isayev on Democratizing Drug Discovery With NVIDIA GPUs

It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically years-long and several-billion-dollar endeavor.

But professors Artem Cherkasov and Olexandr Isayev were surprised to find that no recent academic papers provided a comprehensive, global research review of how deep learning and GPU-accelerated computing impact drug discovery.

In March, they published a paper in Nature to fill this gap, presenting an up-to-date review of the state of the art for GPU-accelerated drug discovery techniques.

Cherkasov, a professor in the department of urologic sciences at the University of British Columbia, and Isayev, an assistant professor of chemistry at Carnegie Mellon University, join NVIDIA AI Podcast host Noah Kravitz this week to discuss how GPUs can help democratize drug discovery.

In addition, the guests cover their inspiration and process for writing the paper, talk about NVIDIA technologies that are transforming the role of AI in drug discovery, and give tips for adopting new approaches to research.

The AI Podcast · Artem Cherkasov and Olexandr Isayev on Democratizing Drug Discovery with Deep Learning – Ep. 172

You Might Also Like

Lending a Helping Hand: Jules Anh Tuan Nguyen on Building a Neuroprosthetic

Is it possible to manipulate things with your mind? Possibly. University of Minnesota postdoctoral researcher Jules Anh Tuan Nguyen discusses allowing amputees to control their prosthetic limbs with their thoughts, using neural decoders and deep learning.

AI of the Tiger: Conservation Biologist Jeremy Dertien on Real-Time Poaching Prevention

Fewer than 4,000 tigers remain in the wild due to a combination of poaching, habitat loss and environmental pressures. Clemson University’s Jeremy Dertien discusses using AI-equipped cameras to monitor poaching to protect a majority of the world’s remaining tiger populations.

Wild Things: 3D Reconstructions of Endangered Species with NVIDIA’s Sifei Liu

Studying endangered species can be difficult, as they’re elusive, and the act of observing them can disrupt their lives. Sifei Liu, a senior research scientist at NVIDIA, discusses how scientists can avoid these pitfalls by studying AI-generated 3D representations of these endangered species.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

Also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out our listener survey.

 


3D Artist Jae Solina Goes Cyberpunk This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

3D artist Jae Solina, who goes by the stage name JSFILMZ, steps In the NVIDIA Studio this week to share his unique 3D creative workflow in the making of Cyberpunk Short Film — a story shrouded in mystery with a tense exchange between two secretive contacts.

As an avid movie buff, JSFILMZ takes inspiration from innovative movie directors Christopher Nolan, David Fincher and George Lucas. He admires their abilities to combine technical skill with storytelling heightened by exciting plot twists.

The Cyberpunk Short Film setting displays stunning realism with ray-traced lighting, shadows and reflections — complemented by rich, vibrant colors.

Astonishingly, JSFILMZ created the film in just one day with the NVIDIA Omniverse platform for 3D design collaboration and world simulation, using the Omniverse Machinima app and the Reallusion iClone Connector. He alternated between systems that use an NVIDIA RTX A6000 GPU and a GeForce RTX 3070 Laptop GPU.

The #MadeinMachinima contest ends soon. Omniverse users can build and animate cinematic short stories with Omniverse Machinima for a chance to win RTX-accelerated NVIDIA Studio laptops. Entries are being accepted until Monday, June 27. 

An Omniverse Odyssey With Machinima 

JSFILMZ’s creative journey starts with scene building in Omniverse Machinima, placing and moving background objects to create the futuristic cyberpunk diner. His RTX GPUs power Omniverse’s built-in RTX renderer to achieve fast, interactive movement within the viewport while preserving photorealistic detail. Less distracting denoising lets JSFILMZ focus on creating without having to wait for his scenes to render.

Ray-traced light reflects off the rim of the character’s glasses, achieving impressive photorealism.

Pulling assets from the NVIDIA MDL material library, JSFILMZ achieved peak realism with every surface, material and texture.

 

The artist then populated the scene with human character models downloaded from the Reallusion content store.

Automated facial animation in Reallusion iClone.

Vocal animation was generated in the Reallusion iClone Connector using the AccuLips feature, which simulates human speech behavior, with each mouth shape naturally taking on the qualities of those that precede or follow it. JSFILMZ simply uploads voiceover files from his actors, and the animations are generated automatically.

 

To capture animations while sitting, JSFILMZ turned to an Xsens Awinda starter body-motion-capture suit, acting out movements for both characters. Using the Xsens software, he processed, cleaned up and exported the motion-capture data.

 

JSFILMZ integrated unique walking animations for each character by searching and selecting the perfect animation sequences in the Reallusion ActorCore store. He returned to the iClone Connector to import and apply separate motion captures to the characters, completing animations for the scene.

The last 3D step was to adjust lighting. For tips on how to light in Omniverse, check out JSFILMZ’s live-streamed tutorial, which offers Omniverse know-how and his lighting technique.

“Cyberpunk Short Film” by 3D artist JSFILMZ.

According to JSFILMZ, adding and manipulating lights revealed another advantage of using Machinima: the ability to conveniently switch between real-time ray-traced mode for more fluid movement in the viewport and the interactive path-traced mode for the most accurate, detailed view.

He then exported final renders with ray tracing using the Omniverse RTX Renderer, which is powered by NVIDIA RTX or GeForce RTX GPUs.

Working with multiple 3D applications connected by Omniverse saved JSFILMZ countless hours of rendering, downloading files, converting file types, reuploading and more. “It’s so crazy that I can do all this, all at home,” he said.

Completing Cyberpunk Short Film required editing and color correction in DaVinci Resolve.

The NVIDIA hardware encoder enables speedy exports.

Color grading, video editing and color scope features deployed by JSFILMZ are all accelerated by his GPU, allowing for quick edits. And the NVIDIA hardware encoder and decoder make GPU-accelerated exports very fast.

And with that, Cyberpunk Short Film was ready for viewing.

3D artists can benefit from JSFILMZ’s NVIDIA Omniverse tutorial YouTube playlist. It’s an extensive overview of the Omniverse platform for creators, covering the basics from installation and setup to in-app features such as lighting, rendering and animating.

3D artist and YouTube content creator Jae Solina, aka JSFILMZ.

JSFILMZ teaches 3D creative workflows specializing in NVIDIA Omniverse and Unreal Engine 5 on his YouTube channel and via Udemy courses.

Learn more about NVIDIA Omniverse, including tips, tricks and more on the Omniverse YouTube channel. For additional support, explore the Omniverse forums or join the Discord server to chat with the community. Check out the Omniverse Twitter, Instagram and Medium page to stay up to date.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.


AI in the Big Easy: NVIDIA Research Lets Content Creators Improvise With 3D Objects

Jazz is all about improvisation — and NVIDIA is paying tribute to the genre with AI research that could one day enable graphics creators to improvise with 3D objects created in the time it takes to hold a jam session.

The method, NVIDIA 3D MoMa, could empower architects, designers, concept artists and game developers to quickly import an object into a graphics engine to start working with it, modifying scale, changing the material or experimenting with different lighting effects.

NVIDIA Research showcased this technology in a video celebrating jazz and its birthplace, New Orleans, where the paper behind 3D MoMa will be presented this week at the Conference on Computer Vision and Pattern Recognition.

Extracting 3D Objects From 2D Images

Inverse rendering, a technique to reconstruct a series of still photos into a 3D model of an object or scene, “has long been a holy grail unifying computer vision and computer graphics,” said David Luebke, vice president of graphics research at NVIDIA.

“By formulating every piece of the inverse rendering problem as a GPU-accelerated differentiable component, the NVIDIA 3D MoMa rendering pipeline uses the machinery of modern AI and the raw computational horsepower of NVIDIA GPUs to quickly produce 3D objects that creators can import, edit and extend without limitation in existing tools,” he said.
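To make the idea of a fully differentiable pipeline concrete, here is a heavily simplified, hypothetical sketch of an inverse-rendering optimization loop in PyTorch. The `differentiable_render` function is an invented placeholder standing in for the pipeline’s GPU-accelerated rasterization and shading stages; this is not NVIDIA’s actual 3D MoMa code, only the general shape of the idea.

```python
# Heavily simplified, hypothetical inverse-rendering loop (not 3D MoMa code).
# `differentiable_render` stands in for the real rasterization/shading stages;
# here it returns a differentiable dummy image so the loop runs end to end.
import torch

def differentiable_render(vertices, material, lighting, camera):
    base = vertices.mean() + material.mean() + lighting.mean() + camera.mean()
    return base * torch.ones(64, 64, 3)  # placeholder H x W x 3 image

def optimize_scene(photos, cameras, vertices, material, lighting, steps=100):
    params = [vertices, material, lighting]
    for p in params:
        p.requires_grad_(True)
    optimizer = torch.optim.Adam(params, lr=1e-2)

    for _ in range(steps):
        optimizer.zero_grad()
        loss = torch.zeros(())
        for photo, camera in zip(photos, cameras):
            rendered = differentiable_render(vertices, material, lighting, camera)
            loss = loss + (rendered - photo).abs().mean()  # photometric L1 loss
        loss.backward()   # gradients flow through every rendering component
        optimizer.step()  # nudge shape, material and lighting toward the photos
    return vertices, material, lighting

# Dummy inputs standing in for the input photos and their camera poses.
photos = [torch.rand(64, 64, 3) for _ in range(3)]
cameras = [torch.rand(4, 4) for _ in range(3)]
vertices, material, lighting = torch.rand(1000, 3), torch.rand(256, 256, 3), torch.rand(16, 3)
optimize_scene(photos, cameras, vertices, material, lighting, steps=10)
```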

To be most useful for an artist or engineer, a 3D object should be in a form that can be dropped into widely used tools such as game engines, 3D modelers and film renderers. That form is a triangle mesh with textured materials, the common language used by such 3D tools.

Triangle meshes are the underlying frames used to define shapes in 3D graphics and modeling.

Game studios and other creators would traditionally create 3D objects like these with complex photogrammetry techniques that require significant time and manual effort. Recent work in neural radiance fields can rapidly generate a 3D representation of an object or scene, but not in a triangle mesh format that can be easily edited.

NVIDIA 3D MoMa generates triangle mesh models within an hour on a single NVIDIA Tensor Core GPU. The pipeline’s output is directly compatible with the 3D graphics engines and modeling tools that creators already use.

The pipeline’s reconstruction includes three features: a 3D mesh model, materials and lighting. The mesh is like a papier-mâché model of a 3D shape built from triangles. With it, developers can modify an object to fit their creative vision. Materials are 2D textures overlaid on the 3D meshes like a skin. And NVIDIA 3D MoMa’s estimate of how the scene is lit allows creators to later modify the lighting on the objects.
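Below is a hedged sketch of what that kind of editable output bundle can look like in code: a small triangle mesh with UVs, texture arrays standing in for material channels, and a lighting estimate, written with the open-source trimesh library. The shapes, file names and values are invented for illustration.

```python
# Illustrative sketch of an editable reconstruction bundle: triangle mesh,
# texture-based materials and a lighting estimate (values are made up).
import numpy as np
import trimesh

# A tiny triangle mesh: 4 vertices, 2 triangles forming a unit quad.
vertices = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
uv = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)

mesh = trimesh.Trimesh(vertices=vertices, faces=faces, process=False)
mesh.visual = trimesh.visual.TextureVisuals(uv=uv)  # the "skin" over the mesh

# Material channels stored as 2D textures; a real pipeline would write these
# out as image files referenced by the mesh's material.
base_color = np.full((256, 256, 3), [200, 170, 60], dtype=np.uint8)  # brass-like
roughness = np.full((256, 256), 80, dtype=np.uint8)

# Environment lighting estimate, e.g. a low-resolution light probe.
environment_light = np.random.rand(16, 32, 3).astype(np.float32)

mesh.export("instrument.obj")  # drops straight into DCC tools and game engines
```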

Tuning Instruments for Virtual Jazz Band

To showcase the capabilities of NVIDIA 3D MoMa, NVIDIA’s research and creative teams started by collecting around 100 images each of five jazz band instruments — a trumpet, trombone, saxophone, drum set and clarinet — from different angles.

NVIDIA 3D MoMa reconstructed these 2D images into 3D representations of each instrument, represented as meshes. The NVIDIA team then took the instruments out of their original scenes and imported them into the NVIDIA Omniverse 3D simulation platform to edit.

In any traditional graphics engine, creators can easily swap out the material of a shape generated by NVIDIA 3D MoMa, as if dressing the mesh in different outfits. The team did this with the trumpet model, for example, instantly converting its original plastic to gold, marble, wood or cork.

Creators can then place the newly edited objects into any virtual scene. The NVIDIA team dropped the instruments into a Cornell box, a classic graphics test for rendering quality. They demonstrated that the virtual instruments react to light just as they would in the physical world, with the shiny brass instruments reflecting brightly, and the matte drum skins absorbing light.

These new objects, generated through inverse rendering, can be used as building blocks for a complex animated scene — showcased in the video’s finale as a virtual jazz band.

The paper behind NVIDIA 3D MoMa will be presented in a session at CVPR on June 22 at 1:30 p.m. Central time. It’s one of 38 papers with NVIDIA authors at the conference. Learn more about NVIDIA Research at CVPR.


NVIDIA Joins Forum to Help Lay the Foundation of the Metaverse

The metaverse is the next big step in the evolution of the internet — the 3D web — which presents a major opportunity for every industry from entertainment to automotive to manufacturing, robotics and beyond.

That’s why NVIDIA is joining our partners in the Metaverse Standards Forum, an open venue for all interested parties to discuss and debate how best to build the foundations of the metaverse.

From a 2D to a 3D Internet 

The early internet of the ’70s and ’80s was accessed purely through text-based interfaces, UNIX shells and consoles. The ’90s introduced the World Wide Web, which made the internet accessible to millions by providing a more natural and intuitive interface with images and text combined into 2D worlds in the form of web pages.

The metaverse that is coming into existence is a 3D spatial overlay of the internet. It continues the trend of making the internet more accessible and more natural for humans by making the interface to the internet indistinguishable from our interface to the real world.

The 3D computer graphics and simulation technologies developed over the past three decades in CAD/CAM, visual effects and video games, combined with the computing power now available, have converged to a point where we can now start building such an interface.

A Place for Both Work and Play

For most people, the term metaverse primarily evokes thoughts of gaming or socializing. They’ll definitely be big, important use cases of the metaverse, but just like with the internet, it won’t be limited to them.

We use the internet for far more than play. Companies and industries run on the internet; it’s part of their essential infrastructure. We believe the same will be true for the emerging metaverse.

For example, retailers are opening virtual shops to sell real and virtual goods. Researchers are using digital twins to design and simulate fusion power plants.

BMW Group is developing a digital twin of an entire factory to more rapidly design and operate efficient and safe factories. NVIDIA is building an AI supercomputer to power a digital twin of the Earth to help researchers study and solve climate change.

A Lesson From the Web

The key to the success of the web from the very start in 1993 was the introduction of a standard and open way of describing a web page — HyperText Markup Language, or HTML. Without HTML’s adoption, we would’ve had disconnected islands on the web, each only linking within themselves.

Fortunately, the creators of the early web and internet understood that open standards — particularly for data formats — were accelerators of growth and a network effect.

The metaverse needs an equivalent to HTML to describe interlinked 3D worlds in glorious detail. Moving between 3D worlds using various tools, viewers and browsers must be seamless and consistent.

The solution is Pixar’s Universal Scene Description (USD) — an open and extensible format, library and composition engine.

USD is one of the many building blocks we’ll need to build the metaverse. Another is glTF, a 3D transmission format developed within the Khronos Group. We see USD and glTF as compatible technologies and hope to see them coevolve as such.
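To make the HTML analogy concrete, here is a hedged sketch of how one USD file can sublayer and reference worlds described in other files, much as a web page links out to other pages. The file and prim names are invented for the example.

```python
# Sketch of USD composition linking separately authored 3D worlds together
# (file and prim names are invented; uses the `usd-core` Python package).
from pxr import Usd

stage = Usd.Stage.CreateNew("metaverse_entry.usda")
world = stage.DefinePrim("/World", "Xform")
stage.SetDefaultPrim(world)

# Sublayer a whole environment authored by another team,
# roughly like a page pulling in a shared stylesheet.
stage.GetRootLayer().subLayerPaths.append("shared_environment.usda")

# Reference individual assets from other files, like hyperlinks between pages.
plaza = stage.DefinePrim("/World/Plaza")
plaza.GetReferences().AddReference("vendor_a/plaza.usd")

storefront = stage.DefinePrim("/World/Plaza/Storefront")
storefront.GetReferences().AddReference("vendor_b/storefront.usd", "/Shop")

stage.GetRootLayer().Save()
```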

A Constellation of Standards

Neil Trevett, vice president of developer ecosystems at NVIDIA and the president of The Khronos Group, the forum’s host, says the metaverse will require a constellation of standards.

The forum won’t set them, but it’ll be a place where designers and users can learn about and try ones they want to use and identify any that are missing or need to be expanded.

We’re thrilled to see the formation of the Metaverse Standards Forum — a free and open venue where people from every domain can gather to contribute to the exciting new era of the internet: the metaverse!


NVIDIA Accelerates Open Data Center Innovation

NVIDIA today became a founding member of the Linux Foundation’s Open Programmable Infrastructure (OPI) project, while making its NVIDIA DOCA networking software APIs widely available to foster innovation in the data center.

Businesses are embracing open data centers, which require applications and services that are easily integrated with other solutions for simplified, lower-cost and sustainable management. Moving to open NVIDIA DOCA will help develop and nurture broad and vibrant DPU ecosystems and power unprecedented data center transformation.

The OPI project aims to create a community-driven, standards-based, open ecosystem for accelerating networking and other data center infrastructure tasks using DPUs.

DOCA includes drivers, libraries, services, documentation, sample applications and management tools to speed up and simplify the development and performance of applications. It allows for flexibility and portability for BlueField applications written using accelerated drivers or low-level libraries, such as DPDK, SPDK, Open vSwitch or OpenSSL. We plan to continue this support. As part of OPI, developers will be able to create a common programming layer to support many of these open drivers and libraries with DPU acceleration.

DOCA library APIs are already publicly available and documented for developers. Open licensing of these APIs will ensure that applications developed using DOCA will support BlueField DPUs as well as those from other providers.

DOCA has always been built on an open foundation. Now NVIDIA is opening the APIs to the DOCA libraries and plans to add OPI support.

Expanding Use of DPUs

AI, containers and composable infrastructure are increasingly important for enterprise and cloud data centers. This is driving the use of DPUs in servers to support software-defined, hardware-accelerated networking, east-west traffic and zero-trust security.

Only the widespread deployment of DPUs such as NVIDIA BlueField can support the ability to offload, accelerate and isolate data center workloads, including networking, storage, security and DevOps management.

NVIDIA’s history of open innovation over the decades includes engaging with leading consortiums, participating in standards committees and contributing to a range of open source software and communities.

We contribute frequently to open source and open-license projects and software such as the Linux kernel, DPDK, SPDK, NVMe over Fabrics, FreeBSD, Apache Spark, Free Range Routing, SONiC, Open Compute Project and other areas covering networking, virtualization, containers, AI, data science and data encryption.

NVIDIA is often among the top three code contributors to many releases of Linux and DPDK. And we’ve historically included an open source version of our networking drivers in the Linux kernel.

With OPI, customers, ISVs, infrastructure appliance vendors and systems integrators will be able to create applications for BlueField DPUs using DOCA to gain the best possible performance and easiest developer experience for accelerated data center infrastructure.


From Code to Clinic, Smart Hospital Tech Boosts Efficiency, Sustainability in Medicine

NVIDIA is collaborating with clinical organizations across Europe to bring AI to the point of care, bolstering clinical pathways with efficiency gains and new data dimensions that can be included in medical decision-making processes.

The University Hospital Essen, in northwestern Germany, is one such organization taking machine learning from the bits to the bedside — using NVIDIA technology and AI to build smart hospitals of the future.

Jens Kleesiek and Felix Nensa, professors at the School of Medicine of the University of Duisburg-Essen, are part of a four-person team leading the research groups that established the Institute for Artificial Intelligence in Medicine (IKIM). The technology developed by IKIM is integrated with the IT infrastructure of University Hospital Essen.

IKIM hosts a data annotation lab, overseen by a team of board-certified radiologists, that accelerates the labeling of anatomic structures in medical images using MONAI, an open-source, PyTorch-based framework for building, training, labeling and deploying AI models for healthcare imaging.

MONAI was created by NVIDIA in collaboration with over a dozen leading clinical and research organizations, including King’s College London.
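For a rough sense of what building a model with MONAI involves, here is a minimal, hypothetical sketch: a 3D U-Net with a Dice loss run on dummy volumes. It is illustrative only; the channel sizes and volume shapes are assumptions, not IKIM’s actual architecture or data.

```python
# Minimal MONAI sketch: a 3D U-Net and Dice loss on dummy volumes
# (illustrative only; not IKIM's actual models or data).
import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

device = "cuda" if torch.cuda.is_available() else "cpu"
model = UNet(
    spatial_dims=3,
    in_channels=1,          # e.g. a single CT channel
    out_channels=2,         # background vs. structure of interest
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
).to(device)

loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch standing in for preprocessed scans and label volumes.
images = torch.randn(2, 1, 96, 96, 96, device=device)
labels = torch.randint(0, 2, (2, 1, 96, 96, 96), device=device)

outputs = model(images)
loss = loss_fn(outputs, labels)
loss.backward()
optimizer.step()
print(f"dice loss: {loss.item():.4f}")
```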

IKIM researchers also use self-supervised learning to pretrain AI models that generate high-quality labels for the hospital’s CT scans, MRIs and more.

Additionally, the IKIM team has developed a smart hospital information platform, or SHIP, an AI-based central healthcare data integration platform and deployment engine. The platform is used by researchers and clinicians to conduct real-time analysis of the slew of data in university hospitals — including medical imaging, radiology reports, clinic notes and patient interviews.

SHIP can, for example, flag an abnormality on a radiology report and notify physicians via real-time push notifications, enabling quicker diagnoses and treatments for patients. The AI can also pinpoint data-driven associations between healthcare metrics like genetic traits and patient outcomes.

“We want to solve real-world problems and bring the solutions right into the clinics,” Kleesiek said. “The SHIP framework is capable of delivering deep learning algorithms that analyze data straight to the clinicians who are at the point of care.”

Plus, increased workflow efficiency — enabled by AI — means increased sustainability within hospitals.

Making Hospitals Smarter

Nensa says his hospital currently has close to 500 IT systems, including those for hospital information, laboratories and radiology. Each consists of critical patient information that’s interrelated — but data from disparate systems can be difficult to connect or draw machine learning-based insights from.

SHIP connects the data from all such systems by automatically translating it into a description standard called Fast Healthcare Interoperability Resources, or FHIR, which is commonly used in medicine to exchange electronic health records. SHIP currently encompasses more than 1.2 billion FHIR resources.
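For a sense of what that normalized data looks like, below is a single, invented FHIR R4 Observation resource expressed in Python. The identifiers, timestamps and values are made up for illustration and are not drawn from SHIP.

```python
# An invented FHIR R4 Observation resource, the kind of record a data
# integration layer can emit when it normalizes lab or report data.
import json

observation = {
    "resourceType": "Observation",
    "id": "example-hemoglobin",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "718-7",
            "display": "Hemoglobin [Mass/volume] in Blood",
        }]
    },
    "subject": {"reference": "Patient/example-patient"},
    "effectiveDateTime": "2022-06-01T08:30:00+02:00",
    "valueQuantity": {
        "value": 13.2,
        "unit": "g/dL",
        "system": "http://unitsofmeasure.org",
        "code": "g/dL",
    },
}

print(json.dumps(observation, indent=2))
```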

Once converted to FHIR, the information can be easily accessed by data scientists, researchers and clinicians for real-time AI training and analysis based on NVIDIA GPUs and DGX A100 systems. This makes it possible for labor-intensive tasks, such as liver volumetry prior to living donor liver transplantation or bone age estimation in children, to be performed fully automatically in the background, instead of requiring a half-hour of manual work by a radiologist.

“The more artificial intelligence is at work in a hospital, the more patients can enjoy human intelligence,” Nensa said. “As AI provides doctors and nurses relief from repetitive tasks like data retrieval and annotation, the medical professionals can focus on what they really want to do, which is to be there and care for their patients.”

NVIDIA DGX A100 systems power IKIM’s AI training and inference. NVIDIA Triton Inference Server enables fast and scalable concurrent serving of AI models within the clinic.
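As a hedged sketch of what serving looks like from the client side, the snippet below queries a deployed model over Triton’s HTTP API using the tritonclient package. The model name, tensor names and shapes are invented for the example.

```python
# Hedged sketch of querying a model served by Triton over HTTP
# (model name, tensor names and shapes are invented for the example).
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# A dummy 3D volume batch standing in for a preprocessed scan.
volume = np.random.rand(1, 1, 96, 96, 96).astype(np.float32)

inputs = [httpclient.InferInput("INPUT__0", list(volume.shape), "FP32")]
inputs[0].set_data_from_numpy(volume)
outputs = [httpclient.InferRequestedOutput("OUTPUT__0")]

response = client.infer(model_name="organ_segmentation", inputs=inputs, outputs=outputs)
prediction = response.as_numpy("OUTPUT__0")
print("prediction shape:", prediction.shape)
```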

The IKIM team also uses NVIDIA FLARE, an open-source platform for federated learning, which allows data scientists to develop generalizable and robust AI models while maintaining patient privacy.

Smarter Equals Greener

In addition to reducing physician workload and increasing time for patient care, AI in hospitals boosts sustainability efforts.

As a highly specialized medical center, the University Hospital Essen must be available year-round for reliable patient treatment, with 24-hour operation times. As a result, patient-oriented, cutting-edge medicine is traditionally associated with high energy consumption.

SHIP helps hospitals increase efficiency, automating tasks and optimizing processes to reduce friction in the workflow — which saves energy. According to Kleesiek, IKIM reuses the energy emitted by GPUs in the data center, which also helps to make the University Hospital Essen greener.

“NVIDIA is providing all of the layers for us to get the most out of the technology, from software and hardware to training led by expert engineers,” Nensa said.

In April, NVIDIA experts hosted a workshop at IKIM, featuring lectures and hands-on training on GPU-accelerated deep learning, data science and AI in medicine. The workshop led IKIM to kickstart additional projects using AI for medicine — including a research contribution to MONAI.

In addition, IKIM is building SmartWard technology to provide an end-to-end AI-powered patient experience in hospitals, from service robots in waiting areas to automated discharge reports.

For the SmartWard project, the IKIM team is considering integrating the NVIDIA Clara Holoscan platform for medical device AI computing.

Subscribe to NVIDIA healthcare news and watch IKIM’s NVIDIA GTC session on demand.

Feature image courtesy of University of Duisburg-Essen.
