Top Israel Medical Center Partners with AI Startups to Help Detect Brain Bleeds, Other Critical Cases

Israel’s largest private medical center is working with startups and researchers to bring potentially life-saving AI solutions to real-world healthcare workflows.

With more than 1.5 million patients across eight medical centers, Assuta Medical Centers conduct over 100,000 surgeries, 800,000 imaging tests and hundreds of thousands of other health diagnostics and treatments each year. These procedures generate huge amounts of de-identified data that Assuta is securely sharing with more than 20 startups through its innovation arm, RISE, launched last year in collaboration with NVIDIA.

One of the startups, Aidoc, is helping Assuta alert imaging technicians within minutes to AI-detected signs of possible bleeding in the brain and other critical conditions in a patient’s scan. Another, Rhino Health, is using federated learning powered by NVIDIA FLARE to make AI development on diverse medical datasets from hospitals across the globe more accessible to Assuta’s collaborators.

Both companies are members of NVIDIA Inception, a global program designed to support cutting-edge startups with go-to-market support, expertise and technology.

“We’re building a hub to serve innovators with the infrastructure they need to develop, test and deploy new AI technology for image analysis and other data-heavy computations in radiology, pathology, genomics and more,” said Daniel Rabina, director of innovation at RISE. “We want to make collaboration with companies, research institutes, hospitals and universities possible while maintaining patient data privacy.”

To support AI development, testing and deployment, Assuta has installed NVIDIA DGX A100 systems on premises and adopted the NVIDIA Clara Holoscan platform, plus software libraries including MONAI for healthcare imaging and NVIDIA FLARE for federated learning.

NVIDIA and RISE are collaborating on RISE with US, a program built to introduce selected Israeli entrepreneurs and early-stage startups working on digital and computational health solutions to the U.S. market. Applications to join the program are open until August 28.

Aidoc Flags Urgent Cases for Radiologist Review

Aidoc, which is headquartered in New York with a research branch in Israel, has developed FDA-cleared AI solutions to flag acute conditions, including brain hemorrhages, pulmonary embolisms and strokes, from imaging scans.

Founded in 2016 by a group of veterans from the Israel Defense Forces, the startup has deployed its AI to analyze millions of cases across more than 1,000 medical facilities, primarily in the U.S., Europe and Israel.

Its algorithms integrate seamlessly with the PACS imaging workflow used by radiologists worldwide, working behind the scenes to analyze each imaging study and flag urgent findings — bringing potentially critical cases to the radiologist’s attention for review.

Aidoc’s tools can help address the growing shortage of radiologists globally by reducing the time a radiologist needs to spend on each case, enabling care for more patients. And by pushing potentially critical cases to the top of a radiologist’s pile, the AI can help clinicians catch important findings sooner, improving patient outcomes.
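Conceptually, this reprioritization works like a priority queue over the radiologist’s reading worklist. The sketch below illustrates the idea in Python (the finding names and urgency scores are hypothetical, not Aidoc’s actual scheme):

```python
import heapq

# Hypothetical urgency scores an AI flagger might assign (lower = more urgent).
URGENCY = {"intracranial_hemorrhage": 0, "pulmonary_embolism": 1, "routine": 9}

def prioritize(worklist):
    """Reorder (finding, study_id) pairs so flagged critical cases come first.

    Ties fall back to arrival order via the enumerate index.
    """
    heap = [(URGENCY.get(finding, 9), i, study_id)
            for i, (finding, study_id) in enumerate(worklist)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

studies = [("routine", "CT-1001"),
           ("intracranial_hemorrhage", "CT-1002"),
           ("pulmonary_embolism", "CT-1003")]
print(prioritize(studies))  # flagged bleeds and embolisms surface first
```

Here a flagged hemorrhage surfaces ahead of routine studies regardless of when it arrived, which is the behavior described above.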

The startup uses NVIDIA Tensor Core GPUs in the cloud through AWS for AI training and inference. Adopting NVIDIA GPUs helped reduce model training time from days to a couple of hours.

Immediate Impact at Assuta Medical Centers 

Assuta is a private chain of hospitals that provides elective care — typically dealing with routine screenings rather than emergency room patients — but it adopted Aidoc’s solution to help imaging technicians spot critical cases that need urgent attention among its roughly 200,000 CT tests conducted annually.

When a radiology scan isn’t urgent, it may take a couple of days for a doctor to review the case. Aidoc can shrink this time to minutes by identifying concerning cases as soon as the scans are captured by radiology staff.

At Assuta, urgent findings are typically found among cancer patients, or people who have recently undergone surgery and need follow-up scans. The healthcare organization is using Aidoc’s AI tools to detect intracranial hemorrhages and two kinds of pulmonary embolism.

“We saw the impact right away,” said Dr. Michal Guindy, head of medical imaging and head of RISE at Assuta. “Just a couple of days after installing Aidoc at Assuta, a patient came in for a follow-up scan after a brain procedure and had an intracranial hemorrhage. Because Aidoc alerted the imaging technician to flag it for further review, our doctors were able to call the patient while they were on their way home and immediately redirect them to the hospital for treatment.”

Rhino Health Fosters Collaboration With Federated Learning

In addition to deploying AI models in full-scale, real-world settings, Assuta supports innovators who are developing, testing or validating new medical AI solutions by sharing the organization’s data through federated learning with Rhino Health.

Assuta has millions of radiology cases digitized — a desirable resource for researchers and startups looking for robust, diverse datasets to train or validate their AI models. But to protect data privacy, patient information must stay safely within the firewalls of medical centers like Assuta.

“Data diversity is necessary to develop AI models meant for the use of medical teams. Without optimal computing resources, it would be extremely difficult to use our data and make the magic happen,” said Rabina. “That’s why we need federated learning enabled by both NVIDIA and Rhino Health.”

Federated learning allows companies, healthcare institutions and universities to work together by training and validating AI models across multiple organizations’ datasets while maintaining each organization’s data privacy. Rhino Health provides a neutral platform — available through the NVIDIA AI Enterprise software suite — that enables secure collaboration, powered by NVIDIA A100 GPUs in the cloud and the NVIDIA FLARE federated learning framework.
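At its core, federated learning means each site computes a model update on its own private data, and only the parameters leave the firewall while a central server averages them. The toy round of federated averaging below, on a linear model in plain NumPy, is a generic illustration of the concept, not the NVIDIA FLARE or Rhino Health API:

```python
import numpy as np

def local_update(weights, site_data, lr=0.1):
    """One local gradient step on a site's private data (toy linear model).

    Only the updated weights leave the site; raw patient data never does.
    """
    X, y = site_data
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, sites):
    """Average the locally updated weights across all participating sites."""
    updates = [local_update(weights, data) for data in sites]
    return np.mean(updates, axis=0)

# Three simulated hospitals, each holding its own private dataset.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = [(X, X @ true_w) for X in (rng.normal(size=(50, 2)) for _ in range(3))]

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, sites)
# w now approximates true_w, without any site ever sharing its raw data
```

The design choice is the point: the server sees only averaged weights, so each organization’s dataset stays behind its own firewall.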

With Rhino Health, Assuta aims to help its collaborators develop AI models across hospitals internationally, resulting in more generalizable algorithms that perform more accurately across different patient populations.

Register for NVIDIA GTC, running online Sept. 19-22, to hear more from leaders in healthcare AI.

Subscribe to NVIDIA healthcare news and watch on demand as Assuta, Aidoc and Rhino Health speak at a GTC panel.

The post Top Israel Medical Center Partners with AI Startups to Help Detect Brain Bleeds, Other Critical Cases appeared first on NVIDIA Blog.

GFN Thursday Brings Thunder to the Cloud With ‘Rumbleverse’ Arriving on GeForce NOW

It’s time to rumble in Grapital City with Rumbleverse launching today on GeForce NOW.

Punch your way into the all-new, free-to-play Brawler Royale from Iron Galaxy Studios and Epic Games Publishing, streaming from the cloud to nearly all devices.

That means gamers can tackle, uppercut, body slam and more from any GeForce NOW-compatible device, including mobile, at full PC quality. And GeForce NOW is the only way for Mac gamers to join the fray.

Plus, jump over to the list of seven new titles in the GeForce NOW library. Members will also notice a new “GET” button that makes accessing titles they’re interested in more seamless, so they can get right into gaming.

Drop in, Throw Down!

Drop into the chaotic world of Grapital City, where players must brawl it out to become the champion. Rumblers can create their own fighters using hundreds of unique items to stand out in the crowd — a 40-person melee crowd, to be exact.

Or, maybe, the strategy isn’t to stand out. With a massive city to run around in — including skyscrapers and an urban landscape — there are plenty of places to hide, duke it out and find crates full of weapons, like baseball bats or stop signs, as well as upgrades to level up with.

Players can explore a ton of moves to take down other rumblers and discover perks with each round to come up with devious new ways to be the last person standing.

To learn the ways of the Rumble, Playground mode is available to explore Grapital City, in addition to various training modules scattered across the map. Players can also form a tag team and fight back to back with a friend.

Rumbleverse is free to play, so getting started is easy when paired with a free GeForce NOW membership. Thanks to the cloud, members don’t even have to wait for the game to download.

Level up to an RTX 3080 membership to stream at up to 1440p and 120 frames per second at ultra-low latency, plus dedicated access to RTX 3080 servers and eight-hour gaming sessions. It’s the best way to get the upper hand when duking it out with fellow rumblers.

Smash the ‘GET’ Button

The GeForce NOW apps on PC, Mac, iOS and browser now feature a “GET” button to link members directly to the digital store of their choice, making it even easier to quickly purchase titles or access free-to-play ones. And without having to wait for game downloads due to cloud streaming, members can dive into their new games as quickly as possible.

Get what you want, when you want it.

Hunted, the newest season of Apex Legends, is also available for members to stream. Hunt or be hunted in the cloud with a new Legend Vantage, an updated map of Kings Canyon, increased level cap and more in Apex Legends: Hunted.

Hunt or be hunted in the cloud in the newest season of “Apex Legends”

Plus, make sure to check out the seven new titles being added this week:

Tyrant’s Blessing (New Release on Steam and Epic Games Store)
Farthest Frontier (New Release on Steam)
Arcade Paradise (Steam and Epic Games Store, Aug. 11)
Cult of the Lamb (Steam, Aug. 11)
Rumbleverse (Epic Games Store, Aug. 11)
Warhammer 40,000: Mechanicus (Epic Games Store, Aug. 11)
Infinite Lagrange (Steam)

How do you plan to be a champion this week? We’ve got a couple of options for you to choose from. Let us know your answer on Twitter or in the comments below.



Design in the Age of Digital Twins: A Conversation With Graphics-Pioneer Donald Greenberg

Asked about the future of design, Donald Greenberg holds up a model of a human aorta.

“After my son became an intravascular heart surgeon at the Cleveland Clinic, he hired one of my students to use CAT scans and create digital 3D models of an aortic aneurysm,” said the computer graphics pioneer in a video interview from his office at Cornell University.

The models enabled custom stents that fit so well that patients could leave the hospital soon after the devices were inserted. It’s one example Greenberg gives of how computer graphics are becoming part of every human enterprise.

A Whole New Chapter

Expanding the frontier, he’s creating new tools for an architecture design course based on today’s capabilities for building realistic 3D worlds and digital twins. It will define a holistic process so everyone from engineers to city planners can participate in a design.

The courseware is still at the concept stage, but his passion for it is palpable. “This is my next big project, and I’m very excited about it,” said the computer graphics professor of the work, which is sponsored by NVIDIA.

“NVIDIA is superb at the hardware and the software algorithms, and for a long time its biggest advantage has been in how it fits them together,” he said.

Greenberg imagines a design process open enough to include urban planners concerned with affordable housing, environmental activists mindful of sustainable living and neighbors who want to know the impact a new structure might have on their access to sunlight.

“I want to put people from different disciplines in the same foxhole so they can see things from different points of view at the same time,” said Greenberg, whose courses have spanned Cornell’s architecture, art, computer science, engineering and business departments.

Teaching With Omniverse

A multidisciplinary approach has fueled Greenberg’s work since 1968, when he started teaching at both Cornell’s colleges of engineering and architecture. And he’s always been rooted in the latest technology.

Today, that means inspiring designers and construction experts to enter the virtual worlds built with photorealistic graphics, simulations and AI in NVIDIA Omniverse.

“Omniverse expands, to multiple domains, the work done with Universal Scene Description, developed by some of the brightest graphics people at places like Pixar — it’s a superb environment for modern collaboration,” he said.

It’s a capability that couldn’t have existed without the million-X advances in computing Greenberg has witnessed in his 54-year career.

He recalls his excitement in 1979 when he bought a VAX-11/780 minicomputer, his first system capable of a million instructions per second. In one of his many SIGGRAPH talks, he said designers would someday have personal workstations capable of 100 MIPS.

Seeing Million-X Advances

The prediction proved almost embarrassingly conservative.

“Now I have a machine that’s 10¹² times more powerful than my first computer — I feel like a surfer riding a tidal wave, and that’s one reason I’m still teaching,” he said.

Greenberg with some of his students trying out the latest design tools.

It’s a long way from the system at General Electric’s Visual Simulation Laboratory in Syracuse, New York, where in the late 1960s he programmed on punch cards to help create one of the first videos generated solely with computer graphics. The 18-minute animation wowed audiences and took him and 14 of his architecture students two years to create.

NASA used the same GE system to train astronauts how to dock the Apollo module with the lunar lander. And the space agency was one of the early adopters of digital twins, he notes, a fact that saved the lives of the Apollo 13 crew after a system malfunction two days into their trip to the moon.

From Sketches to Digital Twins

For Greenberg, it all comes down to the power of computer graphics.

“I love to draw, 99% of intellectual intake comes through our eyes and my recent projects are about how to go from a sketch or idea to a digital twin,” he said.

Among his few regrets, he said he’ll miss attending SIGGRAPH in person this year.

“It became an academic home for my closest friends and collaborators, a community of mavericks and the only place I found creative people with both huge imaginations and great technical skills, but it’s hard to travel at my age,” said the 88-year-old, whose pioneering continues in front of his computer screen.

“I have a whole bunch of stuff I’m working on that I call techniques in search of a problem, like trying to model how the retina sees an image — I’m just getting started on that one,” he said.


Learn More About Omniverse at SIGGRAPH

Anyone can get started working on digital twins with Omniverse by taking a free, self-paced online course at the NVIDIA Deep Learning Institute. And individuals can download Omniverse free.

Educators can request early access to the “Graphics & Omniverse” teaching kit. SIGGRAPH attendees can join a session on “The Metaverse for International Educators” or one of four hands-on training labs on Omniverse.

To learn more, watch NVIDIA CEO Jensen Huang and others in a special address at SIGGRAPH, available on demand.


AI Flying Off the Shelves: Restocking Robot Rolls Out to Hundreds of Japanese Convenience Stores

Tokyo-based startup Telexistence this week announced it will deploy NVIDIA AI-powered robots to restock shelves at hundreds of FamilyMart convenience stores in Japan.

There are 56,000 convenience stores in Japan — the third-highest density worldwide. Around 16,000 of them are run by FamilyMart. Telexistence aims to save time for these stores by offloading repetitive tasks like refilling shelves of beverages to a robot, allowing retail staff to tackle more complex tasks like interacting with customers.

It’s just one example of what can be done by Telexistence’s robots, which run on the NVIDIA Jetson edge AI and robotics platform. The company is also developing AI-based systems for warehouse logistics with robots that sort and pick packages.

“We want to deploy robots to industries that support humans’ everyday life,” said Jin Tomioka, CEO of Telexistence. “The first space we’re tackling this is through convenience stores — a huge network that supports daily life, especially in Japan, but is facing a labor shortage.”

The company, founded in 2017, next plans to expand to convenience stores in the U.S., which is also plagued with a labor shortage in the retail industry — and where more than half of consumers say they visit one of the country’s 150,000 convenience stores at least once a month.

Telexistence Robots Stock Up at FamilyMart

Telexistence will begin deploying its restocking robots, called TX SCARA, to 300 FamilyMart stores in August — and aims to bring the autonomous machines to additional FamilyMart locations, as well as other major convenience store chains, in the coming years.

“Staff members spend a lot of time in the back room of the store, restocking shelves, instead of out with customers,” said Tomioka. “Robotics-as-a-service can allow staff to spend more time with customers.”

TX SCARA runs on a track and includes multiple cameras to scan each shelf, using AI to identify drinks that are running low and plan a path to restock them. The AI system can successfully restock beverages automatically more than 98% of the time.

In the rare cases that the robot misjudges the placement of the beverage or a drink topples over, there’s no need for the retail staff to drop their task to get the robot back up and running. Instead, Telexistence has remote operators on standby, who can quickly address the situation by taking manual control through a VR system that uses NVIDIA GPUs for video streaming.

Telexistence estimates that a busy convenience store needs to restock more than 1,000 beverages a day. TX SCARA’s cloud system maintains a database of product sales based on the name, date, time and number of items stocked by the robots during operation. This allows the AI to prioritize which items to restock first based on past sales data.
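One simple way to picture this sales-driven prioritization: weight each product’s empty shelf slots by its historical sales velocity, so fast sellers get refilled first. The sketch below is a hypothetical illustration (the field names and scoring rule are invented, not Telexistence’s actual system):

```python
# Hypothetical per-product state the robot's cloud database might track.
inventory = [
    {"name": "green tea", "slots_empty": 6, "daily_sales": 120},
    {"name": "cola",      "slots_empty": 2, "daily_sales": 300},
    {"name": "water",     "slots_empty": 6, "daily_sales": 80},
]

def restock_order(products):
    """Order products so the emptiest, fastest-selling drinks come first.

    Score = empty slots weighted by sales velocity, so a half-empty
    bestseller outranks an equally empty slow mover.
    """
    return sorted(products,
                  key=lambda p: p["slots_empty"] * p["daily_sales"],
                  reverse=True)

plan = [p["name"] for p in restock_order(inventory)]
print(plan)  # biggest, fastest-selling gaps first
```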

Achieving Edge AI With NVIDIA Jetson 

TX SCARA has multiple AI models under the hood. An object-detection model identifies the types of drinks in a store to determine which one belongs on which shelf. It’s combined with another model that helps detect the movement of the robot’s arm, so it can pick up a drink and accurately place it on the shelf between other products. A third is for anomaly detection: recognizing if a drink has fallen over or off the shelf. One more detects which drinks are running low in each display area.

The Telexistence team used custom pre-trained neural networks as their base models, adding synthetic and annotated real-world data to fine-tune the neural networks for their application. Using a simulation environment to create more than 80,000 synthetic images helped the team augment their dataset so the robot could learn to detect drinks in any color, texture or lighting environment.
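Domain randomization of this kind can be sketched in a few lines of NumPy: jitter each synthetic render’s per-channel color, overall brightness and sensor noise so the detector never overfits to one appearance. This is a generic illustration, not Telexistence’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def randomize(image):
    """Return a randomized copy of an HxWx3 float image in [0, 1].

    Per-channel gain mimics color/texture variation, a global gain mimics
    lighting changes, and Gaussian noise mimics sensor differences.
    """
    channel_gain = rng.uniform(0.7, 1.3, size=3)   # color variation
    brightness = rng.uniform(0.6, 1.4)             # lighting variation
    noise = rng.normal(0.0, 0.02, size=image.shape)
    return np.clip(image * channel_gain * brightness + noise, 0.0, 1.0)

base = np.full((64, 64, 3), 0.5)                   # one synthetic render
augmented = [randomize(base) for _ in range(10)]   # ten varied training images
```

Repeating this across tens of thousands of renders yields a dataset that covers far more appearance variation than the original scenes alone.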

For AI model training, the team relied on an NVIDIA DGX Station. The robot itself uses two NVIDIA Jetson embedded modules: the NVIDIA Jetson AGX Xavier for AI processing at the edge, and the NVIDIA Jetson TX2 module to transmit video streaming data.

On the software side, the team uses the NVIDIA JetPack SDK for edge AI and the NVIDIA TensorRT SDK for high-performance inference.

“Without TensorRT, our models wouldn’t run fast enough to detect objects in the store efficiently,” said Pavel Savkin, chief robotics automation officer at Telexistence.

Telexistence further optimized its AI models by using the half-precision floating-point format (FP16) instead of single precision (FP32).
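The trade-off behind that optimization is easy to demonstrate with NumPy: FP16 halves the memory per value at the cost of roughly three decimal digits of precision instead of seven, which is usually acceptable for inference. This illustrates the numeric formats themselves, not TensorRT internals:

```python
import numpy as np

# A few representative FP32 model weights.
weights = np.array([0.1234567, 1e-5, 3.14159], dtype=np.float32)
half = weights.astype(np.float16)

# Half the memory per parameter.
assert half.nbytes == weights.nbytes // 2

# Small absolute error from FP16's reduced 11-bit significand.
error = np.abs(half.astype(np.float32) - weights)
assert error.max() < 1e-2
```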

Learn more about the latest in AI and robotics at NVIDIA GTC, running online Sept. 19-22. Registration is free.


Future of Creativity on Display ‘In the NVIDIA Studio’ During SIGGRAPH Special Address

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

A glimpse into the future of AI-infused virtual worlds was on display at SIGGRAPH — the world’s largest gathering of computer graphics experts — as NVIDIA founder and CEO Jensen Huang put the finishing touches on the company’s special address.

Announcements included a host of updates to a pillar of the NVIDIA Studio software suite: NVIDIA Omniverse, a platform for 3D design collaboration and world simulation. New features and improvements to apps including Create, Machinima, Audio2Face and Nucleus will help 3D artists build virtual worlds, digital twins and avatars for the metaverse.

Each month, NVIDIA Studio Driver releases provide artists, creators and 3D developers with the best performance and reliability when working with creative applications. Available now, the August NVIDIA Studio Driver gives creators peak reliability for using Omniverse and their favorite creative apps.

Plus, this week’s featured In the NVIDIA Studio artist, Simon Lavit, exhibits his mastery of Omniverse as the winner of the #MadeInMachinima contest. The 3D artist showcases the creative workflow for his victorious short film, Painting the Astronaut.

Omniverse Expands

NVIDIA Omniverse — an open platform based on Universal Scene Description (USD) for building and connecting virtual worlds — just received a significant upgrade.

Omniverse Apps — including Create 2022.2 — received a major PhysX update with soft-body simulation, particle-cloth simulation and soft-contact models, delivering more realism to physically accurate virtual worlds. New OmniLive workflows give artists more freedom through a collaboration interface for non-destructive USD workflows.

Omniverse users can now add animations and emotions with the Audio2Face app.

Audio2Face 2022.1 is now available in beta, including major updates that enable AI-powered emotion control and full facial animation, delivering more realism than ever. Users can now direct emotion over time, as well as mix key emotions like joy, amazement, anger and sadness. The AI can also direct eye, teeth and tongue motion, in addition to the avatar’s skin, providing an even more complete facial-animation solution.

Learn additional details on these updates and more.

Winning the #MadeInMachinima Contest

Since he first held a pen, Simon Lavit has been an artist. Now, Lavit adds Omniverse Machinima to the list of creative tools he’s mastered, as the winner of the #MadeInMachinima contest.

His entry, Painting the Astronaut, was selected by an esteemed panel of judges that included numerous creative experts.

Powered by a GeForce RTX 3090 GPU, Lavit’s creative workflow showcases the breadth and interoperability of Omniverse, its Apps and Connectors. He used lighting and scene setting to establish the short film’s changing mood, helping audiences understand the story’s progression. Its introduction, for example, is bright and clear. The film then gets darker, conveying the idea of the unknown as the character starts his journey.

The lighting for “Painting the Astronaut” helps guide the story, with 3D assets from the Omniverse library.

Lavit storyboarded on paper before starting his digital process with the Machinima and Omniverse Create apps. He quickly turned to NVIDIA’s built-in 3D asset library — filled with free content from Mount & Blade II: Bannerlord, Mechwarrior 5: Mercenaries, Squad and more — to populate the scene.

The 3D model for the spaceship was created in Autodesk Maya within Omniverse.

Then, Lavit used Autodesk Maya to create 3D models for some of his hero assets — like the protagonist Sol’s spaceship. The Maya Omniverse Connector allowed him to visualize scenes within Omniverse Create. He also benefited from RTX-accelerated ray tracing and AI denoising in Maya, resulting in highly interactive and photorealistic renders.

Next, Lavit textured the models in Adobe Substance 3D, which also has an Omniverse Connector. Substance 3D uses NVIDIA Iray rendering, including for textures and substances. It also features RTX-accelerated light- and ambient-occlusion baking, which optimizes assets in seconds.

Lavit then returned to Machinima for final layout, animation and rendering. The result was composited in Adobe After Effects, with an extra layer of effects and music. Ultimately, the contest-winning piece of art came from “a pretty simple workflow to keep the complexity to a minimum,” Lavit said.

“Painting the Astronaut” netted Lavit a GeForce RTX 3080 Ti-powered ASUS ProArt StudioBook 16.

To power his future creativity from anywhere, Lavit won an ASUS ProArt StudioBook 16. This NVIDIA Studio laptop packs top-of-the-line technology into a device that enables users to work on the go with world-class power from a GeForce RTX 3080 Ti Laptop GPU and beautiful 4K display.

3D Artist and Omniverse #MadeInMachinima contest winner Simon Lavit.

Lavit, born in France and now based in the U.S., sees every project as an adventure. Living in a different country from where he was born changed his vision of art, he said. Lavit regularly finds inspiration from the French graphic novel series, The Incal, which is written by Alejandro Jodorowsky and illustrated by renowned cartoonist Jean Giraud, aka Mœbius.

Made the Grade

The next generation of creative professionals is heading back to campus. Choosing the right NVIDIA Studio laptop can be tricky, but students can use this guide to find the perfect tool to power their creativity — like the Lenovo Slim 7i Pro X, an NVIDIA Studio laptop now available with a GeForce RTX 3050 Laptop GPU.

While the #MadeInMachinima contest has wrapped, creators can graduate to an NVIDIA RTX A6000 GPU in the #ExtendOmniverse contest, running through Friday, Aug. 19. Perform something akin to magic by making your own NVIDIA Omniverse Extension for a chance to win an RTX A6000 or GeForce RTX 3090 Ti GPU. Winners will be announced in September at GTC.

Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.


At SIGGRAPH, NVIDIA CEO Jensen Huang Illuminates Three Forces Sparking Graphics Revolution

In a swift, eye-popping special address at SIGGRAPH, NVIDIA execs described the forces driving the next era in graphics, and the company’s expanding range of tools to accelerate them.

“The combination of AI and computer graphics will power the metaverse, the next evolution of the internet,” said Jensen Huang, founder and CEO of NVIDIA, kicking off the 45-minute talk.

It will be home to connected virtual worlds and digital twins, a place for real work as well as play. And, Huang said, it will be vibrant with what will become one of the most popular forms of robots: digital human avatars.

With 45 demos and slides, five NVIDIA speakers announced:

A new platform for creating avatars, NVIDIA Omniverse Avatar Cloud Engine (ACE).
Plans to build out Universal Scene Description (USD), the language of the metaverse.
Major extensions to NVIDIA Omniverse, the computing platform for creating virtual worlds and digital twins.
Tools to supercharge graphics workflows with machine learning.

“The announcements we made today further advance the metaverse, a new computing platform with new programming models, new architectures and new standards,” he said.

Metaverse applications are already here.

Huang pointed to consumers trying out virtual 3D products with augmented reality, telcos creating digital twins of their radio networks to optimize and deploy radio towers and companies creating digital twins of warehouses and factories to optimize their layout and logistics.

Enter the Avatars

The metaverse will come alive with virtual assistants, avatars we interact with as naturally as talking to another person. They’ll work in digital factories, play in online games and provide customer service for e-tailers.

“There will be billions of avatars,” said Huang, calling them “one of the most widely used kinds of robots” that will be designed, trained and operated in Omniverse.

Digital humans and avatars require natural language processing, computer vision, complex facial and body animations and more. To move and speak in realistic ways, this suite of complex technologies must be synced to the millisecond.

It’s hard work that NVIDIA aims to simplify and accelerate with Omniverse Avatar Cloud Engine. ACE is a collection of AI models and services that build on NVIDIA’s work spanning everything from conversational AI to animation tools like Audio2Face and Audio2Emotion.

“With Omniverse ACE, developers can build, configure and deploy their avatar application across any engine in any public or private cloud,” said Simon Yuen, a senior director of graphics and AI at NVIDIA. “We want to democratize building interactive avatars for every platform.”

ACE will be available early next year, running on embedded systems and all major cloud services.

Yuen also demonstrated the latest version of Omniverse Audio2Face, an AI model that can create facial animation directly from voices.

“We just added more features to analyze and automatically transfer your emotions to your avatar,” he said.

Future versions of Audio2Face will create avatars from a single photo, applying textures automatically and generating animation-ready 3D meshes. They’ll sport high-fidelity simulations of muscle movements an AI can learn from watching a video — even lifelike hair that responds as expected to virtual grooming.

USD, a Foundation for the 3D Internet

Many superpowers of the metaverse will be grounded in USD, a foundation for the 3D internet.

The metaverse “needs a standard way of describing all things within 3D worlds,” said Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA.

“We believe Universal Scene Description, invented and open sourced by Pixar, is the standard scene description for the next era of the internet,” he added, comparing USD to HTML in the 2D web.

Lebaredian described NVIDIA’s vision for USD as a key to opening even more opportunities than those in the physical world.

“Our next milestones aim to make USD performant for real-time, large-scale virtual worlds and industrial digital twins,” he said, noting NVIDIA’s plans to help build out support in USD for international character sets, geospatial coordinates and real-time streaming of IoT data.

Examples of NVIDIA’s planned investments in USD

To further accelerate USD adoption, NVIDIA will release a compatibility testing and certification suite for USD. It lets developers know their custom USD components produce an expected result.

In addition, NVIDIA announced a set of simulation-ready USD assets, designed for use in industrial digital twins and AI training workflows. They join a wealth of USD resources available online for free, including USD-ready scenes, on-demand tutorials, documentation and instructor-led courses.

“We want everyone to help build and advance USD,” said Lebaredian.

Omniverse Expands Its Palette

One of the biggest announcements of the special address was a major new release of NVIDIA Omniverse, a platform that’s been downloaded nearly 200,000 times.

Huang called Omniverse “a USD platform, a toolkit for building metaverse applications, and a compute engine to run virtual worlds.”

The latest version packs several upgraded core technologies and more connections to popular tools.

The links, called Omniverse Connectors, are now in development for Unity, Blender, Autodesk Alias, Siemens JT, SimScale, the Open Geospatial Consortium and more. Connectors are now available in beta for PTC Creo, Visual Components and SideFX Houdini. These new developments join Siemens Xcelerator, now part of the Omniverse network, welcoming more industrial customers into the era of digital twins.

Like the internet itself, Omniverse is “a network of networks,” connecting users across industries and disciplines, said Steve Parker, NVIDIA’s vice president of professional graphics.

Examples of new features in NVIDIA Omniverse.

Nearly a dozen leading companies will showcase Omniverse capabilities at SIGGRAPH, including hardware, software and cloud-service vendors ranging from AWS and Adobe to Dell, Epic and Microsoft. A half dozen companies will conduct NVIDIA-powered sessions on topics such as AI and virtual worlds.

Speeding Physics, Animating Animals

Parker detailed several technology upgrades in Omniverse. They span enhancements for simulating physically accurate materials with the Material Definition Language (MDL), real-time physics with PhysX and the hybrid rendering and AI system, RTX.

“These core technology pillars are powered by NVIDIA high performance computing from the edge to the cloud,” Parker said.

For example, PhysX now supports soft-body and particle-cloth simulation, bringing more physical accuracy to virtual worlds in real time. And NVIDIA is fully open sourcing MDL so it can readily support graphics API standards like OpenGL or Vulkan, making the materials standard more broadly available to developers.
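
Particle-cloth simulation of the kind described above is commonly built on Verlet integration with distance constraints between neighboring particles. The sketch below shows that general technique on a two-particle strand; it is a minimal illustration, not PhysX's actual implementation, and all parameter values are arbitrary.

```python
import numpy as np

# A minimal sketch of particle-cloth simulation: Verlet integration plus
# iterative distance-constraint relaxation between connected particles.
def step_cloth(pos, prev, rest_len, pairs, dt=0.016, gravity=-9.8, iters=8):
    """Advance a particle cloth one timestep.

    pos, prev : (N, 2) current and previous particle positions
    pairs     : list of (i, j) index pairs joined by a rest_len constraint
    """
    # Verlet integration: next = 2*pos - prev + a*dt^2
    accel = np.zeros_like(pos)
    accel[:, 1] = gravity
    nxt = 2.0 * pos - prev + accel * dt * dt
    # Satisfy distance constraints by relaxation
    for _ in range(iters):
        for i, j in pairs:
            delta = nxt[j] - nxt[i]
            dist = np.linalg.norm(delta)
            if dist > 1e-9:
                corr = 0.5 * (dist - rest_len) * delta / dist
                nxt[i] += corr
                nxt[j] -= corr
    return nxt, pos

# Two particles joined as a simple strand, falling under gravity
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
prev = pos.copy()
pos, prev = step_cloth(pos, prev, rest_len=1.0, pairs=[(0, 1)])
```

Production engines add self-collision, bending constraints and GPU parallelism on top of this basic loop.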

Omniverse also will include neural graphics capabilities developed by NVIDIA Research that combine RTX graphics and AI. For example:

Animal Modelers let artists iterate on an animal’s form with point clouds, then automatically generate a 3D mesh.
GauGAN360, the next evolution of NVIDIA GauGAN, generates 8K, 360-degree panoramas that can easily be loaded into an Omniverse scene.
Instant NeRF creates 3D objects and scenes from 2D images.
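
The NeRF family of models that Instant NeRF accelerates relies on lifting input coordinates into sines and cosines at increasing frequencies, so a small network can represent high-frequency scene detail. The sketch below shows that classic frequency encoding; Instant NeRF itself uses a faster hash-grid variant.

```python
import numpy as np

# Frequency ("positional") encoding as used by NeRF-style neural fields:
# each coordinate is expanded into sin/cos terms at doubling frequencies.
def positional_encoding(x, num_freqs=4):
    """Map coordinates x of shape (N, D) to (N, 2 * num_freqs * D)."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi      # (L,)
    angles = x[:, None, :] * freqs[None, :, None]    # (N, L, D)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=1)
    return enc.reshape(x.shape[0], -1)

pts = np.array([[0.5, 0.25, 0.75]])                  # one 3D sample point
print(positional_encoding(pts, num_freqs=4).shape)   # -> (1, 24)
```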

An Omniverse Extension for NVIDIA Modulus, a machine learning framework, will let developers use AI to speed simulations of real-world physics up to 100,000x, so the metaverse looks and feels like the physical world.

In addition, Omniverse Machinima — subject of a lively contest at SIGGRAPH — now sports content from Post Scriptum, Beyond the Wire and Shadow Warrior 3 as well as new AI animation tools like Audio2Gesture.

A demo from Industrial Light & Magic showed another new feature. Omniverse DeepSearch uses AI to help teams intuitively search through massive databases of untagged assets, bringing up accurate results for terms even when they’re not specifically listed in metadata.

Graphics Get Smart

One of the essential pillars of the emerging metaverse is neural graphics. It’s a hybrid discipline that harnesses neural network models to accelerate and enhance computer graphics.

“Neural graphics intertwines AI and graphics, paving the way for a future graphics pipeline that is amenable to learning from data,” said Sanja Fidler, vice president of AI at NVIDIA. “Neural graphics will redefine how virtual worlds are created, simulated and experienced by users,” she added.

AI will help artists spawn the massive amount of 3D content needed to create the metaverse. For example, they can use neural graphics to capture objects and behaviors in the physical world quickly.

Fidler described NVIDIA software to do just that, Instant NeRF, a tool to create a 3D object or scene from 2D images. It’s the subject of one of NVIDIA’s two best paper awards at SIGGRAPH.

In the other best paper award, neural graphics powers a model that can predict and reduce reaction latencies in esports and AR/VR applications. The two best papers are among 16 total that NVIDIA researchers are presenting this week at SIGGRAPH.

Neural graphics blends AI into the graphics pipeline.

Designers and researchers can apply neural graphics and other techniques to create their own award-winning work using new software development kits NVIDIA unveiled at the event.

Fidler described one of them, Kaolin Wisp, a suite of tools to create neural fields — AI models that represent a 3D scene or object — with just a few lines of code.

Separately, NVIDIA announced NeuralVDB, the next evolution of the open-sourced standard OpenVDB that industries from visual effects to scientific computing use to simulate and render water, fire, smoke and clouds.

NeuralVDB uses neural models and GPU optimization to dramatically reduce memory requirements so users can interact with extremely large and complex datasets in real time and share them more efficiently.
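
The memory savings start from a basic property of volumetric data: smoke, clouds and fire are mostly empty space, so storing only the active voxels (as VDB-style structures do) already shrinks the footprint dramatically, before any neural compression is applied. The toy comparison below illustrates only that dense-vs-sparse gap, with made-up grid sizes.

```python
import numpy as np

# Toy illustration: volumetric data is mostly empty, so storing only active
# voxel coordinates and values is far smaller than a dense grid. NeuralVDB
# goes further by fitting compact neural representations to the data.
dense = np.zeros((64, 64, 64), dtype=np.float32)
dense[30:34, 30:34, 30:34] = 1.0           # a small "puff" of smoke

active = np.argwhere(dense != 0.0)          # coordinates of active voxels
values = dense[dense != 0.0]                # their stored values

dense_bytes = dense.nbytes
sparse_bytes = active.nbytes + values.nbytes
print(dense_bytes // sparse_bytes)          # dense is hundreds of times larger
```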

“AI, the most powerful technology force of our time, will revolutionize every field of computer science, including computer graphics, and NVIDIA RTX is the engine of neural graphics,” Huang said.

Watch the full special address at NVIDIA’s SIGGRAPH event site. That’s where you’ll also find details of labs, presentations and the debut of a behind-the-scenes documentary on how we created our latest GTC keynote.

The post At SIGGRAPH, NVIDIA CEO Jensen Huang Illuminates Three Forces Sparking Graphics Revolution appeared first on NVIDIA Blog.

NVIDIA AI Makes Performance Capture Possible With Any Camera

NVIDIA AI tools are enabling deep learning-powered performance capture for creators at every level: visual effects and animation studios, creative professionals — even any enthusiast with a camera.

With NVIDIA Vid2Vid Cameo, creators can harness AI to capture their facial movements and expressions from any standard 2D video taken with a professional camera or smartphone. The performance can be applied in real time to animate an avatar, character or painting.

And with 3D body-pose estimation software, creators can capture full-body movements like walking, dancing and performing martial arts — bringing virtual characters to life with AI.

For individuals without 3D experience, these tools make it easy to animate creative projects, even using smartphone footage. Professionals can take it a step further, combining the pose estimation and Vid2Vid Cameo software to transfer their own movements to virtual characters for live streams or animation projects.

And creative studios can harness AI-powered performance capture for concept design or previsualization — to quickly convey an idea of how certain movements look on a digital character.

NVIDIA Demonstrates Performance Capture With Vid2Vid Cameo

NVIDIA Vid2Vid Cameo, available through a demo on the NVIDIA AI Playground, needs just two elements to generate a talking-head video: a still image of the avatar or painting to be animated, plus footage of the original performer speaking, singing or moving their head.

Based on generative adversarial networks, or GANs, the model captures a performer’s facial motion in real time and transfers it to the virtual character. Trained on 180,000 videos, the network learned to identify 20 key points to model facial motion — encoding the location of the eyes, mouth, nose, eyebrows and more.
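
The transfer step can be pictured simply: measure how the performer's keypoints have moved from a neutral reference and apply that displacement to the avatar's own keypoints. The real model learns its keypoints and image warping end to end with a GAN; the fixed arrays and the `transfer_motion` helper below are hypothetical stand-ins for illustration only.

```python
import numpy as np

# Sketch of keypoint-based motion transfer: displacement of the driver's
# keypoints from a neutral frame is applied to the avatar's keypoints.
def transfer_motion(driver_neutral, driver_frame, avatar_neutral):
    """Apply the driver's keypoint displacements to the avatar's keypoints."""
    displacement = driver_frame - driver_neutral     # (K, 2) per-point motion
    return avatar_neutral + displacement

driver_neutral = np.zeros((20, 2))                   # 20 keypoints at rest
driver_frame = driver_neutral.copy()
driver_frame[0] = [0.0, 0.1]                         # e.g. an eyebrow raised
avatar = np.full((20, 2), 5.0)                       # avatar keypoints elsewhere

animated = transfer_motion(driver_neutral, driver_frame, avatar)
```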

These points are extracted from the video stream of the performer and applied to the avatar or digital character. See how it works in the demo below, which transfers a performance of Edgar Allan Poe’s “Sonnet — to Science” to a portrait of the writer by artist Gary Kelley.

Visual Platforms Integrate Vid2Vid Cameo, Pose Estimation by NVIDIA

While Vid2Vid Cameo captures detailed facial expressions, pose estimation AI tracks movement of the whole body — a key capability for creators working with virtual characters that perform complex motions or move around a digital scene.

Pose Tracker is a convolutional neural network model available as an Extension in the NVIDIA Omniverse 3D design collaboration and world simulation platform. It allows users to upload footage or stream live video as a motion source to animate a character in real time. Creators can download NVIDIA Omniverse for free and get started with step-by-step tutorials.

Companies that have integrated NVIDIA AI for performance capture into their products include:

Derivative, maker of TouchDesigner, a node-based real-time visual development platform, has implemented Vid2Vid Cameo as a way to provide easy-to-use facial tracking.
Notch, a company offering a real-time graphics tool for 3D, visual effects and live-events visuals, uses body-pose estimation AI from NVIDIA to help artists simplify stage setups. Instead of relying on custom hardware-tracking systems, Notch users can work with standard camera equipment to control 3D character animation in real time.
Pixotope, a leading virtual production company, uses NVIDIA AI-powered real-time talent tracking to drive interactive elements for live productions. The Norway-based company shared its work enabling interaction between real and virtual elements on screen at the most recent NVIDIA GTC.

Learn more about NVIDIA’s latest advances in AI, digital humans and virtual worlds at SIGGRAPH, the world’s largest gathering of computer graphics experts, running through Thursday, Aug. 11.

The post NVIDIA AI Makes Performance Capture Possible With Any Camera appeared first on NVIDIA Blog.

As Far as the AI Can See: ILM Uses Omniverse DeepSearch to Create the Perfect Sky

For cutting-edge visual effects and virtual production, creative teams and studios benefit from digital sets and environments that can be updated in real time.

A crucial element in any virtual production environment is a sky dome, often used to provide realistic lighting for virtual environments and in-camera visual effects. Legendary studio Industrial Light & Magic (ILM) is tapping into the power of AI to take its skies to new heights with NVIDIA AI-enabled DeepSearch and Omniverse Enterprise.

Capturing photorealistic details of a sky can be tricky. At SIGGRAPH today, ILM showcased how its team, with the NVIDIA DeepSearch tool, used natural language to rapidly search through a massive asset library and create a captivating sky dome.

The video shows how Omniverse Enterprise can provide filmmakers with the ultimate flexibility to develop the ideal look and lighting to further their stories. This helps artists save time, enhance productivity and accelerate creativity for virtual production.

After narrowing down their search results, the ILM team auditions the remaining sky domes in virtual reality to assess whether the asset will be a perfect match for the shot. By using VR, ILM can approximate what the skies will look like on a virtual production set.

The Sky’s the Limit With AI

An extensive library with thousands of references and 3D assets offers advantages, but it also presents some challenges without an efficient way to search through all the data.

Typically, users set up folders or tag items with keywords, which can be incredibly time consuming. This is especially true for a studio like ILM, which has over 40 years’ worth of material in its reference library, including photography, matte paintings, backdrops and other materials that have been captured over the decades.

With hundreds of thousands of untagged pieces of content, it’s impractical for the ILM team to manually search through them on a production schedule.

Omniverse DeepSearch, however, lets ILM search intuitively through untagged assets using text or a 2D image. DeepSearch uses AI to categorize and find images automatically — this results in massive time savings for the creative team, removing the need to manually tag each asset.
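
Searching untagged assets by text or image generally works by embedding queries and assets into a shared vector space and ranking by similarity, so matches surface even when no keyword exists. The sketch below shows that general approach; the hand-made vectors stand in for a real encoder (e.g. a CLIP-style model), and DeepSearch's actual model is not public.

```python
import numpy as np

# Sketch of embedding-based semantic search: rank assets by cosine
# similarity between their embedding vectors and the query's embedding.
def rank_assets(query_vec, asset_vecs):
    """Return asset indices sorted by cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    a = asset_vecs / np.linalg.norm(asset_vecs, axis=1, keepdims=True)
    return np.argsort(-(a @ q))

assets = np.array([
    [0.9, 0.1, 0.0],   # e.g. "stormy sky"
    [0.1, 0.9, 0.0],   # e.g. "sunset over water"
    [0.0, 0.1, 0.9],   # e.g. "city street at night"
])
query = np.array([0.8, 0.2, 0.0])      # embedding of "dramatic clouds"
print(rank_assets(query, assets))       # most similar asset first
```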

All images courtesy of Industrial Light & Magic.

“With Omniverse DeepSearch, we have the ability to search through data in real time, which is key for production,” said Landis Fields, real time principal creative at ILM. “And being able to search through assets with natural language allows for our creative teams to easily find what they’re looking for, helping them achieve the final look and feel of a scene much more efficiently than before.”

DeepSearch also works on USD files, so the ILM team can review search results and bring images into the 3D space in Omniverse Enterprise. Artists can then interact with the 3D environment using a VR headset.

With NVIDIA DeepSearch and Omniverse Enterprise, ILM has the potential to accelerate creative pipelines, lower costs and enhance production workflows to create captivating content for virtual productions.

Join NVIDIA at SIGGRAPH to learn more about the latest Omniverse announcements, watch the company’s special address on demand and see the global premiere of NVIDIA’s documentary, The Art of Collaboration: NVIDIA, Omniverse, and GTC, on Wednesday, Aug. 10, at 10 a.m. PT.

The post As Far as the AI Can See: ILM Uses Omniverse DeepSearch to Create the Perfect Sky appeared first on NVIDIA Blog.

New NVIDIA Neural Graphics SDKs Make Metaverse Content Creation Available to All

Creating 3D objects to build scenes for games, virtual worlds including the metaverse, product design or visual effects is traditionally a meticulous process, in which skilled artists balance detail and photorealism against deadlines and budget pressures.

It takes a long time to make something that looks and acts as it would in the physical world. And the problem gets harder when multiple objects and characters need to interact in a virtual world. Simulating physics becomes just as important as simulating light. A robot in a virtual factory, for example, needs to have not only the same look, but also the same weight capacity and braking capability as its physical counterpart.

It’s hard. But the opportunities are huge, affecting trillion-dollar industries as varied as transportation, healthcare, telecommunications and entertainment, in addition to product design. Ultimately, more content will be created in the virtual world than in the physical one.

To simplify and shorten this process, NVIDIA today released new research and a broad suite of tools that apply the power of neural graphics to the creation and animation of 3D objects and worlds.

These SDKs — including NeuralVDB, a groundbreaking update to the industry-standard OpenVDB, and Kaolin Wisp, a PyTorch library establishing a framework for neural fields research — ease the creative process for designers while making it easy for millions of users who aren’t design professionals to create 3D content.

Neural graphics is a new field intertwining AI and graphics to create an accelerated graphics pipeline that learns from data. Integrating AI enhances results, helps automate design choices and provides new, yet to be imagined opportunities for artists and creators. Neural graphics will redefine how virtual worlds are created, simulated and experienced by users.

These SDKs and research contribute to each stage of the content creation pipeline, including:

3D Content Creation

Kaolin Wisp – an addition to Kaolin, a PyTorch library enabling faster 3D deep learning research by reducing the time needed to test and implement new techniques from weeks to days. Kaolin Wisp is a research-oriented library for neural fields, establishing a common suite of tools and a framework to accelerate new research in neural fields.
Instant Neural Graphics Primitives – a new approach to capturing the shape of real-world objects, and the inspiration behind NVIDIA Instant NeRF, an inverse rendering model that turns a collection of still images into a digital 3D scene. This technique and associated GitHub code accelerate the process by up to 1,000x.
3D MoMa – a new inverse rendering pipeline that allows users to quickly import a 2D object into a graphics engine to create a 3D object that can be modified with realistic materials, lighting and physics.
GauGAN360 – the next evolution of NVIDIA GauGAN, an AI model that turns rough doodles into photorealistic masterpieces. GauGAN360 generates 8K, 360-degree panoramas that can be ported into Omniverse scenes.
Omniverse Avatar Cloud Engine (ACE) – a new collection of cloud APIs, microservices and tools to create, customize and deploy digital human applications. ACE is built on NVIDIA’s Unified Compute Framework, allowing developers to seamlessly integrate core NVIDIA AI technologies into their avatar applications.
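
The neural fields that Kaolin Wisp targets share one core idea: instead of storing a signal as samples, fit a compact function of coordinates and query it anywhere. In the sketch below a least-squares fit over fixed sinusoidal features stands in for the small MLPs such libraries actually train; same idea, deliberately toy machinery.

```python
import numpy as np

# A minimal neural-field-style sketch: represent a 1D "scene" as a fitted
# function of coordinates, then query it at points never stored explicitly.
x = np.linspace(0.0, 1.0, 200)
signal = np.sin(2 * np.pi * 3 * x)                 # the signal to represent

def features(coords, num_freqs=8):
    """Fixed sinusoidal features, a stand-in for a learned network."""
    freqs = np.arange(1, num_freqs + 1) * np.pi
    ang = coords[:, None] * freqs[None, :]
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=1)

# "Training": fit feature weights to the signal by least squares
w, *_ = np.linalg.lstsq(features(x), signal, rcond=None)

# The field can now be evaluated at arbitrary coordinates
query = np.array([0.123, 0.789])
pred = features(query) @ w
true = np.sin(2 * np.pi * 3 * query)
print(np.max(np.abs(pred - true)) < 1e-6)
```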

Physics and Animation

NeuralVDB – a groundbreaking improvement on OpenVDB, the current industry standard for volumetric data storage. Using machine learning, NeuralVDB introduces compact neural representations, dramatically reducing memory footprint to allow for higher-resolution 3D data.
Omniverse Audio2Face – an AI technology that generates expressive facial animation from a single audio source. It’s useful for interactive real-time applications and as a traditional facial animation authoring tool.
ASE: Animation Skills Embedding – an approach enabling physically simulated characters to act in a more responsive and life-like manner in unfamiliar situations. It uses deep learning to teach characters how to respond to new tasks and actions.
TAO Toolkit – a framework to enable users to create an accurate, high-performance pose estimation model, which can evaluate what a person might be doing in a scene using computer vision much more quickly than current methods.


Image Features Eye Tracking – a research model linking the quality of pixel rendering to a user’s reaction time. By predicting the best combination of rendering quality, display properties and viewing conditions for the lowest latency, it will enable better performance in fast-paced, interactive computer graphics applications such as competitive gaming.
Holographic Glasses for Virtual Reality – a collaboration with Stanford University on a new VR glasses design that delivers full-color 3D holographic images in a groundbreaking 2.5-mm-thick optical stack.

Join NVIDIA at SIGGRAPH to see more of the latest research and technology breakthroughs in graphics, AI and virtual worlds. Check out the latest innovations from NVIDIA Research, and access the full suite of NVIDIA’s SDKs, tools and libraries.

The post New NVIDIA Neural Graphics SDKs Make Metaverse Content Creation Available to All appeared first on NVIDIA Blog.

Upping the Standard: NVIDIA Introduces NeuralVDB, Bringing AI and GPU Optimization to Award-Winning OpenVDB

NVIDIA today announced NeuralVDB, which brings the power of AI to OpenVDB, the industry-standard library for simulating and rendering sparse volumetric data, such as water, fire, smoke and clouds.

Building on the past decade’s development of OpenVDB, the introduction at SIGGRAPH of NeuralVDB is a game-changer for professionals working in areas like scientific computing and visualization, medical imaging, rocket science and visual effects. By reducing memory footprint by up to 100x, it allows creators, developers and researchers to interact with extremely large and complex datasets in real time.

Over the past decade, OpenVDB has earned an Academy Award as a core technology used throughout the visual-effects industry. It has since grown beyond entertainment to industrial and scientific use cases where sparse volumetric data is prevalent, such as industrial design and robotics.

Last year, NVIDIA introduced NanoVDB, which added GPU support to OpenVDB. This delivered an order-of-magnitude speedup, enabling faster performance and easier development — and opening the door to real-time simulation and rendering.

NeuralVDB builds on the GPU acceleration of NanoVDB by adding machine learning to introduce compact neural representations that dramatically reduce its memory footprint. This allows 3D data to be represented at even higher resolution and at a much larger scale than OpenVDB. The result is that users can easily handle massive volumetric datasets on devices like individual workstations and even laptops.

NeuralVDB offers a significant efficiency improvement over OpenVDB by compressing a volume’s memory footprint up to 100x compared to NanoVDB. This allows users to transmit and share large, complex volumetric datasets much more efficiently.

To accelerate training up to 2x, NeuralVDB allows the weights of a frame to be used for the subsequent one. NeuralVDB also enables users to achieve temporal coherency, or smooth encoding, by using the network results from the previous frame.
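
The warm-start idea above can be shown with a toy optimization: when encoding an animation frame by frame, initializing frame t+1's model from frame t's converged weights converges in fewer steps than training from scratch, and keeps adjacent frames' encodings coherent. The quadratic "model" below is a hypothetical stand-in for a real per-frame network.

```python
# Sketch of warm-starting per-frame training: reuse the previous frame's
# converged weight as the next frame's initialization. The scalar quadratic
# objective stands in for fitting a neural volume representation.
def fit(target, w_init, lr=0.1, tol=1e-6, max_steps=10_000):
    """Gradient-descend w toward target; return (w, steps taken)."""
    w = float(w_init)
    for step in range(max_steps):
        grad = 2.0 * (w - target)          # d/dw of (w - target)^2
        if abs(grad) < tol:
            return w, step
        w -= lr * grad
    return w, max_steps

frame_t = 1.00                              # "volume" for frame t
frame_t1 = 1.05                             # nearly identical next frame

w_cold, cold_steps = fit(frame_t1, w_init=0.0)    # train from scratch
w_t, _ = fit(frame_t, w_init=0.0)
w_warm, warm_steps = fit(frame_t1, w_init=w_t)    # warm-start from frame t

print(warm_steps < cold_steps)              # warm start converges faster
```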

Hitting this trifecta of dramatically reducing memory requirements, accelerating training and enabling temporal coherency allows NeuralVDB to unlock new possibilities for scientific and industrial use cases, including massive, complex volume datasets for AI-enabled medical imaging, large-scale digital twin simulations and more.

Learn more about NeuralVDB.

Watch the NVIDIA special address at SIGGRAPH on demand, and join NVIDIA at the conference through Thursday, Aug. 11, to see more of the latest technology breakthroughs in graphics, AI and virtual worlds.

The post Upping the Standard: NVIDIA Introduces NeuralVDB, Bringing AI and GPU Optimization to Award-Winning OpenVDB appeared first on NVIDIA Blog.
