Music to the Gears: NVIDIA’s Clément Farabet on Orchestrating AI Training for Autonomous Vehicles

Autonomous vehicles are one of the most complex AI challenges of our time. For AVs to operate safely in the real world, the networks running within them must come together as an intricate symphony, which requires intensive training, testing and validation on massive amounts of data.

Clément Farabet, vice president of AI infrastructure at NVIDIA, is a proverbial maestro behind the AV development orchestra. He’s applying nearly 15 years of experience in deep learning — including building Twitter’s AI machine — to teach neural networks how to perceive and react to the world around them.

The AI Podcast · NVIDIA’s Clément Farabet on Orchestrating AI Training for Autonomous Vehicles – Ep. 175

Farabet sat down with NVIDIA’s Katie Burke Washabaugh on the latest episode of the AI Podcast to discuss how the early days of deep learning led to today’s flourishing AV industry, and how he’s approaching deep neural network development.

Tapping into the NVIDIA SaturnV supercomputer, Farabet is designing a highly scalable data factory to deliver intelligent transportation in the near term, while looking ahead to the next frontiers in AI.

You Might Also Like

Lucid Motors’ Mike Bell on Software-Defined Innovation for the Luxury EV Brand

AI and electric-vehicle technology breakthroughs are transforming the automotive industry. These developments pave the way for new innovators, attracting technical prowess and design philosophies from Silicon Valley. Hear how Lucid Motors is applying a tech-industry mindset to develop software-defined vehicles that are always at the cutting edge.

Driver’s Ed: How Waabi Uses AI, Simulation to Teach Autonomous Vehicles to Drive

Teaching the AI brains of autonomous vehicles to understand the world as humans do requires billions of miles of driving experience. The road to achieving this astronomical level of driving leads to the virtual world. Learn how Waabi uses powerful high-fidelity simulations to train and develop production-level autonomous vehicles.

Polestar’s Dennis Nobelius on the Sustainable Performance Brand’s Plans

Driving enjoyment and autonomous driving capabilities can complement one another in intelligent, sustainable vehicles. Learn about the automaker’s plans to unveil its third vehicle, the Polestar 3, the tech inside it, and what the company’s racing heritage brings to the intersection of smarts and sustainability.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.


Sensational Surrealism Astonishes This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

3D phenom FESQ joins us In the NVIDIA Studio this week to share his sensational and surreal animation Double/Sided as well as an inside look into his creative workflow.

FESQ’s cyberpunk style, futuristic yet rooted in emotion, screams originality.

Double/Sided is deeply personal to FESQ, who said the piece “translates really well to a certain period of my life when I was juggling both a programmer career and an artist career.”

He candidly admitted “that time was pretty hard on me with some intense work hours, so I had the constant lingering feeling that I needed to choose one or the other.”

The piece eloquently and cleverly displays this duality with flowers representing nature and FESQ’s passion for creativity, while the skull contains elements of tech, all with a futuristic cyberpunk aesthetic.

Duality Examined

Double/Sided, like most of FESQ’s projects, was carefully researched and concepted, using Figma to create moodboards and gather visual references. Stick-figure sketches let him lay out possible compositions and configurations; scanned into Figma alongside his moodboard, they prepared him to begin the 3D stage.

FESQ used Cinema 4D to build out the base model for the skull. Cinema 4D let him select from popular GPU-accelerated 3D renderers, such as V-Ray, OctaneRender and Redshift, giving him the freedom to switch depending on which renderer is most advantageous for the task.

“Double/Sided” base model with supplemental assets.

With his system’s GeForce RTX 3080 Ti GPU, the viewport is GPU-accelerated, enabling smooth interactivity while editing the 3D model. Satisfied with the look, FESQ turned his attention toward creating supplemental assets that were placed on the skull, such as the flowers and electrical emitters. FESQ often taps Daz Studio at this point in his projects. While not needed for Double/Sided, Daz offers the largest 3D model library with a wide selection of free and premium 3D content, and artists benefit from its RTX-accelerated AI denoiser.

Individual flowers are created then exported into Cinema 4D.

FESQ quickly renders high-quality files with the RTX-accelerated NVIDIA Iray renderer, saving valuable time he’d otherwise spend waiting.

This shade of purple is just right.

Next, FESQ pivoted to Adobe Substance 3D Painter to apply colors and textures. This “might be one of the most important aspects of my work,” he stated.

And for good reason, as FESQ is colorblind. One of the more challenging aspects in his creative work is distinguishing between different colors. This makes FESQ’s ability to create stunning, vibrant art all the more impressive.

FESQ then applied various colors and light materials directly to his 3D model. NVIDIA RTX and NVIDIA Iray technology in the viewport enabled him to ideate in real time and use ray-traced baking for faster rendering speeds — all accelerated by his GPU.

 

He returned to Cinema 4D to rig the asset, apply meshes and finish animating the scene, leaving final composite work to be completed in Adobe After Effects.

Realism can be further enhanced by adding accurate depth effects. For more insights, watch FESQ’s Studio Session tutorial Using MoGraph to Create Depth in Animations in Cinema 4D & Redshift.

FESQ’s color scheme manifested over time, with the consistent use of red and blue morphing into a distinct purple.

Here FESQ used the Lumetri Color effect panel to apply professional-quality grading and color correction tools to the animation, directly on his timeline, with GPU-accelerated speed. The Glow feature, also GPU accelerated, added the neon light look that makes Double/Sided simply stunning.

For tips on how to create neon cables like these, check out FESQ’s Studio Session tutorial Easily Create Animated Neon Cables in Cinema 4D & Redshift to bring animated pieces to life.

 

FESQ couldn’t imagine completing his vision without his GPU, noting “pretty much my entire workflow relies on GPU acceleration.”

3D artist FESQ.

Artists seeking ways to create surreal landscapes can view FESQ’s Studio Session tutorial Creating Surreal Landscapes Using Cloners in Cinema 4D & Redshift.

Check out FESQ’s Instagram for a sampling of his work.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.


Meet the Omnivore: Developer Builds Bots With NVIDIA Omniverse and Isaac Sim

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

Antonio Serrano-Muñoz

While still in grad school, Antonio Serrano-Muñoz has helped author papers spanning planetary gravities, AI-powered diagnosis of rheumatoid arthritis and robots that precisely track millimeter-sized walkers, like ants.

Now, the Ph.D. student in applied robotics at Mondragon University in northern Spain is using robotics and AI in remanufacturing to tackle issues of climate change and pollution.

In short, Serrano-Muñoz is busy discovering unique ways to apply technology to complex, real-world issues — and, in his free time, he makes extensions for NVIDIA Omniverse, a platform for real-time 3D design collaboration and world simulation.

Omniverse Extensions are core building blocks that enable anyone to create and extend the functionality of Omniverse Apps to meet the specific needs of their workflows with just a few lines of Python code.
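For illustration, a bare-bones Kit extension entry point follows the standard omni.ext pattern. This is a minimal sketch, not one of Serrano-Muñoz’s extensions, and the class name and log messages are placeholders.

```python
# Minimal sketch of an Omniverse Kit extension entry point (names are placeholders).
import omni.ext


class ExampleExtension(omni.ext.IExt):
    def on_startup(self, ext_id):
        # Called when the extension is enabled in an Omniverse App.
        print(f"[example.ext] starting up: {ext_id}")

    def on_shutdown(self):
        # Called when the extension is disabled or the app shuts down.
        print("[example.ext] shutting down")
```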

Serrano-Muñoz has created six open-source Omniverse Extensions that can be accessed on GitHub, one of which enhances the capabilities of NVIDIA Isaac Sim, an Omniverse-powered application framework for creating photorealistic, physically accurate virtual environments in which to develop, train and simulate AI robots.

“Since the beginning of my Ph.D. studies, I decided to work with Isaac Sim to simulate robots,” Serrano-Muñoz said. “It offers a powerful, real-time simulation platform with an ultra-realistic physics engine and graphics — as well as a clean, simple application programming interface that makes it easy to interact with the whole system.”

Omniverse for Robotics Simulation

Serrano-Muñoz has pursued robotics for as long as he can remember. Growing up in Cuba, he always fiddled with electronics, and he fell deeper in love with technology when he began coding in college.

“Robots can assist humans with strenuous, high-precision, repetitive and sometimes hazardous tasks,” Serrano-Muñoz said. “They have the potential to improve our lives, and I hope my work advances robotics in a way that allows us to build a better present and achieve a better future.”

He believes Omniverse is crucial to his doctoral studies in applied robotics.

“Performing real-time, graphically realistic simulations of robotics environments wasn’t possible before Omniverse,” he said. “The platform opens the door to a new era of revolutionary changes in robotics, simulation and real-time collaboration.”

Omniverse links specialists of all kinds — engineers, designers, content creators — for the development of simulation systems, he added. Key for this is Universal Scene Description (USD), an open source 3D scene description and extensible file framework serving as the common language for virtual worlds.

“USD plays an important role in the process of authoring, composing and reading a hierarchically organized scene to create and manipulate its rendering elements and objects,” Serrano-Muñoz said.
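As a rough illustration of that hierarchical authoring model, here’s a minimal sketch using USD’s pxr Python API; the file name and prim paths are arbitrary examples, not part of Serrano-Muñoz’s projects.

```python
# Minimal USD authoring sketch: build a small prim hierarchy and save it.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("example_scene.usda")
UsdGeom.Xform.Define(stage, "/World")                    # hierarchy root
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")   # a renderable prim
sphere.GetRadiusAttr().Set(2.0)
stage.GetRootLayer().Save()
```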

Extending Omniverse for Isaac Sim

Using NVIDIA Omniverse, the developer has created robot simulations for industrial use cases and a digital twin of Mondragon University’s laboratory for robotics prototyping.

A digital twin of a bench in Mondragon University’s robotics lab, made with NVIDIA Omniverse.

And while working on such projects, Serrano-Muñoz wanted to integrate with Isaac Sim a tool he was already familiar with: Robot Operating System, or ROS, a set of software libraries for building robot applications. So, he created an Omniverse Extension to enable just that.

The extension lets users manipulate simulated robotic systems in the Omniverse-powered Isaac Sim application via ROS control interfaces. ROS MoveIt, a motion planning framework for robots, can be used in conjunction with Isaac Sim’s dynamic control extension and PhysX capabilities, which bring physical accuracy to high-fidelity robotics simulations.
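The extension’s exact interface isn’t detailed here, but a generic ROS node that streams joint commands to a simulated arm might look like the following sketch. The topic and joint names are hypothetical placeholders, not Isaac Sim’s actual API.

```python
# Hypothetical ROS node publishing joint commands to a simulated robot arm.
import rospy
from sensor_msgs.msg import JointState

rospy.init_node("sim_arm_commander")
pub = rospy.Publisher("/joint_command", JointState, queue_size=10)
rate = rospy.Rate(30)  # stream commands at 30 Hz

msg = JointState()
msg.name = ["joint_1", "joint_2", "joint_3"]  # placeholder joint names

while not rospy.is_shutdown():
    msg.header.stamp = rospy.Time.now()
    msg.position = [0.0, 0.5, -0.5]  # target joint angles in radians
    pub.publish(msg)
    rate.sleep()
```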

“It’s easy to develop code without leaving the Omniverse Kit,” Serrano-Muñoz said. “Omniverse Extensions come with a system-wide integration API, installation, activation and reload mechanisms to augment the functionality of Omniverse Apps.”

This particular extension for ROS, he added, boosts agile prototyping for robotics applications — which is further accelerated by his NVIDIA RTX 3080 Laptop GPU — making his workflow faster than ever.

Hear more from Serrano-Muñoz about using digital twins for industrial robotics by watching his NVIDIA GTC session on demand. And watch his Community Spotlight on the Omniverse Twitch channel happening Aug. 3 at 11 a.m. PT.

Join in on the Creation

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Developers like Serrano-Muñoz will join NVIDIA at SIGGRAPH, a global computer graphics conference running Aug. 8-11. Watch the Omniverse community livestream at SIGGRAPH on Tuesday, Aug. 9, at noon PT to learn how Omniverse and other design and visualization technologies are driving breakthroughs in graphics and GPU-accelerated software.

Plus, anyone can submit to the inaugural #ExtendOmniverse developer contest through Friday, Aug. 19. Create an Omniverse Extension using Omniverse Code for a chance to win an NVIDIA RTX GPU.

Check out artwork from other “Omnivores” and submit projects in the gallery. Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Follow NVIDIA Omniverse on Instagram, Twitter, YouTube and Medium for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.


What Is a QPU?

Just as GPUs and DPUs enable accelerated computing today, they’re also helping a new kind of chip, the QPU, boot up the promise of quantum computing.

In your hand, a quantum processing unit might look and feel very similar to a graphics or a data processing unit. They’re all typically chips, or modules with multiple chips, but under the hood the QPU is a very different beast.

So, What’s a QPU?

A QPU, aka a quantum processor, is the brain of a quantum computer that uses the behavior of particles like electrons or photons to make certain kinds of calculations much faster than processors in today’s computers.

QPUs rely on behaviors like superposition, the ability of a particle to be in many states at once, described in the relatively new branch of physics called quantum mechanics.

By contrast, CPUs, GPUs and DPUs all apply principles of classical physics to electrical currents. That’s why today’s systems are called classical computers.

QPUs could advance cryptography, quantum simulations and machine learning and solve thorny optimization problems.

QPUs | GPUs
Quantum processing units | Graphics processing units
Relies on quantum physics | Relies on classical physics
Uses qubits that can be more than 0 and 1 | Uses bits that are either 0 or 1
Uses states of subatomic particles | Uses electricity switched in transistors
Great for cryptography and simulating quantum effects | Great for HPC, AI and classical simulations

How Does a Quantum Processor Work?

CPUs and GPUs calculate in bits, on/off states of electrical current that represent zeros or ones. By contrast, QPUs get their unique powers by calculating in qubits — quantum bits that can represent many different quantum states.

A qubit is an abstraction that computer scientists use to express data based on the quantum state of a particle in a QPU. Like the hands on a clock, qubits point to quantum states that are like points in a sphere of possibilities.
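A toy example helps make the distinction concrete. The sketch below, which assumes only NumPy, represents a qubit as two complex amplitudes and shows how measurement probabilities fall out of them; it isn’t how a physical QPU is programmed.

```python
import numpy as np

bit = 1  # a classical bit is simply 0 or 1

# A qubit is a normalized pair of complex amplitudes:
# |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)  # equal superposition
qubit = np.array([alpha, beta], dtype=complex)

# Born rule: measurement probabilities are the squared magnitudes of the amplitudes.
p0, p1 = np.abs(qubit) ** 2
print(p0, p1)  # 0.5 0.5 -- the qubit yields 0 or 1 with equal probability
```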

The power of a QPU is often described by the number of qubits it contains. Researchers are developing additional ways to test and measure the overall performance of a QPU.

Many Ways to Make a Qubit

Corporate and academic researchers are using a wide variety of techniques to create the qubits inside a QPU.

The most popular approach these days is called a superconducting qubit. It’s basically made from one or more tiny metallic sandwiches called Josephson junctions, where electrons tunnel through an insulating layer between two superconducting materials.

Qubits inside IBM’s Eagle superconducting QPU.

The current state of the art packs more than 100 of these junctions into a single QPU. Quantum computers using this approach isolate the electrons by cooling them to temperatures near absolute zero with powerful refrigerators that look like high-tech chandeliers. (See image below.)

A Qubit of Light

Some companies use photons rather than electrons to form qubits in their quantum processors. These QPUs don’t require expensive, power-hungry refrigerators, but they need sophisticated lasers and beam splitters to manage the photons.

A refrigeration unit for a superconducting quantum computer.

 

Researchers are using and inventing other ways to create and connect qubits inside QPUs. For example, some use an analog process called quantum annealing, but systems using these QPUs have limited applications.

It’s early days for quantum computers, so it’s not yet clear what sorts of qubits in what kinds of QPUs will be widely used.

Simple Chips, Exotic Systems

Theoretically, QPUs may require less power and generate less heat than classical processors. However, the quantum computers they plug into can be somewhat power hungry and expensive.

That’s because quantum systems typically require specialized electronic or optical control subsystems to precisely manipulate particles. And most require vacuum enclosures, electromagnetic shielding or sophisticated refrigerators to create the right environment for the particles.

D-Wave shows qubits and QPU in a full system.

That’s one reason why quantum computers are expected to live mainly in supercomputing centers and large data centers.

QPUs Do Cool Stuff

Thanks to the complex science and technology, researchers expect the QPUs inside quantum computers will deliver amazing results. They are especially excited about four promising possibilities.

First, they could take computer security to a whole new level.

Quantum processors can factor enormous numbers quickly, a core function in cryptography. That means they could break today’s security protocols, but they can also create new, much more powerful ones.

In addition, QPUs are ideally suited to simulating the quantum mechanics of how stuff works at the atomic level. That could enable fundamental advances in chemistry and materials science, starting domino effects in everything from the design of lighter airplanes to more effective drugs.

Researchers also hope quantum processors will solve optimization problems classical computers can’t handle in fields like finance and logistics. And finally, they may even advance machine learning.

So, When Will QPUs Be Available?

For quantum researchers, QPUs can’t come soon enough. But challenges run the gamut.

On the hardware level, QPUs are not yet powerful or dependable enough to tackle most real-world jobs. However, early QPUs — and GPUs simulating them with software like NVIDIA cuQuantum — are beginning to show results that help researchers, especially in projects exploring how to build better QPUs and develop quantum algorithms.

Researchers are using prototype systems available through companies such as Amazon, IBM, IonQ, Rigetti and Xanadu. Governments around the world are beginning to see the promise of the technology, so they’re making significant investments to build ever larger and more ambitious systems.

How Do You Program a Quantum Processor?

Software for quantum computing is still in its infancy.

Much of it looks like the kind of assembly-language code programmers had to slog through in the early days of classical computers. That’s why developers have to understand the details of the underlying quantum hardware to get their programs running.

But here, too, there are real signs of progress toward the holy grail — a single software environment that will work across any supercomputer, a sort of quantum OS.

Several early projects are in the works. All struggle with the limitations of the current hardware; some are hampered by the limits of the companies developing the code.

For example, some companies have deep expertise in enterprise computing but lack experience in the kind of high-performance environments where much of the scientific and technical work in quantum computing will be done. Others lack expertise in AI, which has synergies with quantum computing.

Enter Hybrid Quantum Systems

The research community widely agrees that for the foreseeable future, classical and quantum computers will work in tandem. So, software needs to run well across QPUs, CPUs and GPUs, too.

Researchers described a hybrid classical-quantum computer in a 2017 paper.

To drive quantum computing forward, NVIDIA recently announced the NVIDIA Quantum Optimized Device Architecture (QODA), an open platform for programming hybrid quantum systems.

QODA includes a high-level language that’s concise and expressive so it’s powerful and easy to use. With QODA, developers can write programs that run on QPUs in quantum computers and GPUs simulating QPUs in classical systems.

NVIDIA QODA provides developers a unified platform for programming any hybrid quantum-classical computer.

QODA will support every kind of quantum computer and every sort of QPU.

At its launch, quantum system and software providers including Pasqal, Xanadu, QC Ware and Zapata expressed support for QODA. Users include major supercomputing centers in the U.S. and Europe.

QODA builds on NVIDIA’s extensive expertise in CUDA software, which accelerates HPC and AI workloads for scientific, technical and enterprise users.

With a beta release of QODA expected before the end of the year, the outlook for QPUs in 2023 and beyond is bright.

—Yunchao Liu, a Ph.D. candidate in quantum computing at the University of California, Berkeley, assisted in the research for this article.

 


1,650+ Global Interns Gleam With NVIDIA Green

A record number of interns calls for a record-sized celebration.

In our largest contingent ever, over 1,650 interns from 350+ schools started with NVIDIA worldwide over the past year.

Amidst busy work days tackling real-world projects across engineering, automation, robotics and more, the group’s also finishing up a three-day celebration, culminating today with National Intern Day. Events ranged from tech demos to virtual meditation and yoga classes to an exclusive Q&A with NVIDIA founder and CEO Jensen Huang.

The three stories below highlight the meaningful work of our interns — who are roughly half undergrads, half grad students — and the connections they’ve been forging.

Bailey Tinkers His Way Into Computer Engineering

Ever since he started tinkering with PC parts at age 12, Darryl Bailey — a computer engineering student at Georgia Tech — knew he wanted to work with computers.

He’s doing just that this summer as an ASIC verification intern on the Compute Express Link team, which ensures bug-free pre-silicon design across multiple GPUs. So far, he’s worked on a dashboard that will simplify the workflows of design verification engineers.

“This project specifically has a high impact because the script is going to be used by all design verification engineers at NVIDIA,” he said. “It’s a really cool feeling to have my code out there in action.”

Bailey hasn’t just been gaining technical skills here. He’s also honed his work style and learned how to use soft skills to most effectively wield the hard skills he’s acquired at school.

“The most important thing I’ve gotten out of this internship is that we’re all just one big team,” he said. “I realized that as much as I want to learn and dive into everything related to computers, it’s also okay to just focus on one thing, because we’re all working together towards the final goal.”

Kulkarni Pioneers a Path in Engineering

This summer, Seema Kulkarni, an electrical and computer engineering student at the University of Texas at Austin, joined NVIDIA as a software R&D intern working on NVIDIA Omniverse, a virtual world simulation and collaboration platform for 3D workflows.

Kulkarni comes from a background with limited early exposure to careers in tech. Her academic journey initially suggested a future in finance and marketing. But, wanting to make a more tangible impact, she switched over to engineering.

“I really loved sitting in on the Women’s Leadership Panel hosted by the University Recruiting team because it felt inspiring to know that even as a woman, you can stay in technical fields for a long time and love it,” she said. “Seeing these female leaders drive innovation here at NVIDIA affirmed that no matter where you’re at, there’s always room to flourish.”

So far, Kulkarni has been working on projects like building asset validators that will simplify the user interface for NVIDIA Omniverse users, and debugging Universal Scene Description code to resolve critical issues. The work has taught her to be a better software engineer by challenging her to think the way the engineers who came before her did.

Kim Kicks Off Her Second Round in Technical Writing

Writing may not be the first thing that comes to mind when one thinks about an internship at NVIDIA.

But as JJ Kim, a marketing major from Boston University, points out, communication is key in any business, even in tech.

This summer, Kim is returning for her second stint on the enterprise marketing team as a technical writing intern. She’s assisted with SIGGRAPH preparations and is churning out explainer blogs, which break down technical concepts in a digestible, approachable way.

“I always feel like I’m learning new things when I write these explainer blogs because they do such a good job of helping someone like me — who has limited technical knowledge — understand what it is that NVIDIA technology does and the impact that it’s making,” she said.

Kim says what’s brought her back for a second internship is NVIDIA’s inclusive, welcoming culture.

“Everyone is so willing to help, which makes work feel like such a safe environment,” she said. “I’m not afraid to try new things out or ask questions because I have such an amazing team of experienced people to work with.”

Read more about NVIDIA’s internship program. Applications are accepted year-round.  


Pony.ai Express: New Autonomous Trucking Collaboration Powered by NVIDIA DRIVE Orin

More than 160 years after the legendary Pony Express delivery service completed its first route, a new generation of “Pony”-emblazoned vehicles is taking an AI-powered approach to long-haul delivery.

Autonomous driving company Pony.ai announced today a partnership with SANY Heavy Truck (SANY), China’s largest heavy equipment manufacturer, to jointly develop level 4 autonomous trucks. The vehicles will run on the Pony.ai autonomous driving controller (ADC), built on NVIDIA DRIVE Orin.

The joint venture plans to produce a mixed fleet of new energy and internal combustion engine vehicles, with the goal of increasing the share of alternative powertrains over time.

Long-haul trucks play a vital role in the global economy. Around the world, companies delivered a total of 257 billion packages in 2021 — a number expected to nearly double in four years, according to Guidehouse consultancy.

The addressable market for trucking is substantial. In 2021 alone, heavy truck sales in China exceeded 1.3 million units, the China Association of Automobile Manufacturers reported.

At the same time, persistent worker shortages are putting pressure on the industry, with driver pay making up 25-30% of the total cost of long-haul trucking.

Companies are beginning to turn toward autonomous trucks to help ease these bottlenecks —  Guidehouse estimates that, by 2032, more than 1.2 million automated trucks and buses will be deployed worldwide each year, with more than half of those vehicles operating in China.

Pony.ai and SANY are working to deliver automated solutions even sooner, with small-scale deliveries beginning this year, and mass production planned for 2024. Within a few years, the venture aims to reach annual production of around 10,000 trucks.

From Horsepower to AI Power

The new joint venture is able to achieve near-term deployment by building these autonomous trucks on the high-performance, automotive-grade NVIDIA DRIVE Orin system-on-a-chip.

In June, Pony.ai unveiled its latest generation ADC, designed for the low-latency and real-time deep neural network (DNN) performance necessary for safe autonomous driving. The AV platform includes both single and dual DRIVE Orin configurations, achieving 254 and 508 trillion operations per second (TOPS), respectively.

As a software-defined compute platform, DRIVE Orin is designed to handle large numbers of applications and DNNs simultaneously, while achieving systematic safety standards, to ensure safe, highly advanced intelligent driving.

With DRIVE Orin, Pony.ai and SANY’s autonomous trucks can seamlessly handle the demands of long-haul shipping, running around the clock.

Working Together for More Intelligent Delivery

Collaboration is a key driver behind deploying autonomous vehicles in the short term.

In addition to working with SANY, Pony.ai recently formed a joint venture with logistics platform provider Sinotrans, one of the largest logistics companies in China, to develop an intelligent logistics network.

By combining smart fleet management with software-defined vehicles, these joint ventures are poised to revolutionize delivery, creating a powerful standard and framework for future development in the industry.

And with NVIDIA DRIVE Orin at the core, these fleets are redefining horsepower for the long haul.


Welcome Back, Commander: ‘Command & Conquer Remastered Collection’ Joins GeForce NOW

Take a trip down memory lane this week with an instantly recognizable classic, Command & Conquer Remastered Collection, joining the nearly 20 Electronic Arts games streaming from the GeForce NOW library.

Speaking of remastered, GeForce NOW members can enhance their gameplay further with improved resolution scaling in the 2.0.43 app update. When the feature is turned on, a game’s resolution will match that of the screen it’s being played on, whether PC, Mac or SHIELD TV, for better image quality. Check if your system supports Enhanced mode on this page.

Canadian gamers across British Columbia and Alberta now get a two-year NVIDIA GeForce NOW Priority Membership for free when they sign up for the TELUS PureFibre Ultimate Gamer bundle at $120 CAD per month for the first two years. For more information and to sign up, visit www.telus.com/gamers.

Plus, explore the nine new titles streaming from the GeForce NOW library this week.

Embark on Your Greatest Exploit

Kane lives! The game that defined the real-time strategy genre is back, remastered in glorious 4K resolution and streaming at GeForce quality from the cloud. Lead one of two playable factions — the Global Defense Initiative or the Brotherhood of Nod — through 100+ campaign missions while building up your base, collecting resources and crushing the competition.

The collection includes the critically acclaimed titles Command & Conquer and Command & Conquer: Red Alert, along with three expansions, rebuilt multiplayer, a modernized user interface, a map editor feature, bonus footage gallery and over seven hours of remastered music.

 

Developed by former Westwood Studios team members at Petroglyph Games, alongside the Command & Conquer player community, the collection maintains the “easy-to-play, difficult-to-master” quality that fans know and love about the franchise.

Stream the collection across nearly all devices, even on Macs. RTX 3080 members can command up to 4K resolution and 60 frames per second from the PC and Mac apps, in the living room on the updated SHIELD TV, and on mobile devices, including the ability to stream at 120 frames per second on 120Hz Android phones. They can also conquer opponents with ultra-low latency for the best gaming sessions.

New Games on the Block

A new season brings summertime and new in-game fun to ‘Riders Republic’ from now until Aug. 10.

After a dose of revamped nostalgia, members can also check out the new Season 3 ‘Summer Break’ event of Ubisoft’s Riders Republic, or the newest content update for Hunt: Showdown, The Prescient Night. If that’s not enough, here are the nine new games arriving to the cloud this week:

Sweet Transit (New release on Steam, July 28)
Fire Commander (New release on Steam)
Blasphemous (Epic Games Store)
Command & Conquer Remastered Collection (Steam and Origin)
Genesis Alpha One (Epic Games Store)
This Is the Police (Epic Games Store)
This Is the Police 2 (Epic Games Store)
Warhammer Age of Sigmar: Storm Ground (Epic Games Store)
Roguebook (Epic Games Store)

While you’re planning out how you’ll conquer your weekend, we’ve got a strategy question for you. Let us know in the comments below or on Twitter.

When it comes to strategy, the best defense is a good offense: discuss.

— NVIDIA GeForce NOW (@NVIDIAGFN) July 27, 2022


NVIDIA Studio Laptops Offer Students AI, Creative Capabilities That Are Best in… Class

Selecting the right laptop is a lot like trying to pick the right major. Both can be challenging tasks where choosing wrongly costs countless hours. But pick the right one, and graduation is just around the corner.

The tips below can help the next generation of artists select the ideal NVIDIA Studio laptop to maximize performance for the critical workload demands of their unique creative fields — all within budget.

Students face a wide range of creative fields — including 3D animation, video, graphic design and photography — that require differing levels of performance and raise a host of questions: What laptop specs are needed for specific workflows? How will this laptop perform for the most challenging projects? How can one future-proof their digital canvas?

Studio laptops are the perfect back-to-school tool, featuring NVIDIA RTX GPUs dedicated to accelerating 3D, video and AI features — especially 3D rendering and video editing. The most popular creative apps have been optimized to run on RTX GPUs for faster performance. These purpose-built laptops feature vivid, color-accurate displays and blazing-fast memory and storage to boost creative workflows.

An AI for Animation

3D students create new worlds from scratch — shaping 3D objects and scenes with modeling tools, bringing characters to life with animation features, and creating realistic volumetric effects such as fire and smoke.

Animators routinely work on computationally intensive projects, often across multiple creative apps at once. The right GPU is critical for 3D workflows. Select an underpowered system, and artists will spend more time waiting than creating.

MSI Creator z16 NVIDIA Studio laptop.

Laptops equipped with a GeForce RTX 3070 Ti Laptop GPU or higher — including the MSI Creator z16, available at Best Buy — tackle demanding 3D projects in real time. These high-performance graphics cards prevent slowdowns, even while working on large projects.

Razer Blade 15 NVIDIA Studio laptop.

Students looking for an added boost can jump to a system with a GeForce RTX 3080 or 3080 Ti GPU and up to 16 gigabytes of graphics memory for extra-demanding, extra-large models. Laptops like the Razer Blade 15 with an RTX 3080 Ti Laptop GPU, available on Razer.com, provide incredible performance that takes 3D from concept to completion faster.

The benefits of a Studio laptop can be felt in the combination of hardware and software. All essential 3D apps — Adobe Substance 3D, Autodesk Maya, Chaos V-Ray, Unreal Engine and more — feature RTX-accelerated benefits. Real-time ray tracing, AI denoising in the viewport and countless other features empower interactive 3D modeling and animations without the painstaking wait for renders.

Performance is measured using various systems and will vary by model. Blender 3.1 OpenData benchmark, with the OptiX and Metal render engines (respectively).

RTX GPUs exclusively power NVIDIA Omniverse, a groundbreaking platform that interconnects existing 3D workflows, replacing linear pipelines with live-sync creation so artists can work like never before, at unprecedented speeds. Best of all, it’s a free download, available on all Studio laptops.

Reel-ey Fast Film Projects

Gone are the endless days spent in dingy university editing bays. Future filmmakers can now harness, in portable form, the same NVIDIA GPU power that every visual effects Oscar nominee of the last 14 years has used.

Tasked with color grading, editing 8K RAW footage, adding visual and motion effects, and ordering and fine-tuning content, film students often need to experiment with styles and techniques.

Studio laptops use NVIDIA technologies — like the NVIDIA Encoder (NVENC) for faster encoding and Tensor Cores for time-saving AI features — to rapidly accelerate video work, giving students the necessary time to hone their craft.

These laptops also feature best-in-class displays: 100% sRGB coverage is a perfect match for what most viewers will see, while 95% DCI-P3 coverage supports advanced editing with the wider range of colors on UHD and HDR monitors. Most models offer factory calibration and HDR options as well.

Dell XPS 15 NVIDIA Studio laptop.

Students editing up to 4K footage can pick up the Dell XPS 15 with a GeForce RTX 3050 Ti Laptop GPU from Dell.com. Or, they can make the leap to the Dell XPS 17 with a GeForce RTX 3060 Laptop GPU and larger screen.

ASUS ZenBook Pro Duo NVIDIA Studio laptop.

The proper GPU depends on a project’s needs. Film students will want the power of either a GeForce RTX 3060 or 3070 Laptop GPU to ensure they can comfortably work with up to 6K footage. The ASUS ZenBook Pro Duo 15 OLED UX582 is available from Amazon, configured with an RTX 3060, RTX 3070 or RTX 3080 Laptop GPU. The handy ScreenPad Plus 14-inch 4K matte touch screen, the Duo’s secondary display, is perfect for displaying video timelines.

For video work that’s in 8K RAW, or heavy on visual effects, the GeForce RTX 3080 Ti Laptop GPU is highly recommended. Like the GPUs recommended for 3D animation, it features 16 gigabytes of dedicated VRAM, ensuring smooth production.

Performance is measured using various systems and will vary by model. Adobe Premiere Pro export test measured through Adobe Media Encoder 14.3.2 using various 4K sequences with a combination of typical effects.

RTX GPUs feature a hardware-based encoder and decoder: NVENC and NVDEC. Offloading these compute-intensive tasks from the CPU enables industry-leading render speeds, plus smooth playback and scrubbing of high-res video. Color-correction tools in Blackmagic DaVinci Resolve are GPU-accelerated, as are 30+ visual and motion features in Adobe Premiere Pro and After Effects.

Students also have access to time-saving AI features like DaVinci Resolve’s Face Recognition for automatically tagging clips and Speedwarp to produce stunning slow motion. AI also improves production quality, like with Topaz Labs Video Enhance AI, which increases video resolution while retaining high-fidelity details with a single click.

Studio laptops with GeForce RTX 30 Series GPUs speed up every phase of the video production process.

AI-Accelerated Photography and Graphic Design

Capturing the perfect shot is only the start these days. Photography majors also have to touch up their photos — adjusting lighting and shadows, applying filters and layers, as well as tweaking other fine details. Graphic design students will perform similar tasks, with a greater emphasis on web design, graphics and more.

While many modern computers are capable of accomplishing these tasks, all of the popular applications have AI-accelerated features that can dramatically improve efficiency when paired with GeForce RTX GPUs found in NVIDIA Studio laptops.

The Microsoft Surface Laptop Studio takes on many forms.

The nicely equipped Microsoft Surface Laptop Studio offers an optional GeForce RTX 3050 Ti Laptop GPU, giving it ample power to tackle an ever-growing list of RTX-accelerated apps and AI features. Available at Microsoft.com, the boundary-pushing design lets students flex their creative muscles on the sleek 14.4-inch PixelSense Flow touch screen. Its versatile design ensures photographers can touch up their photos, their way.

Lenovo Yoga Slim 7i Pro X Studio laptop

Lenovo’s Yoga Slim 7i Pro X with a GeForce RTX 3050 Laptop GPU is where powerful performance meets real portability. Great for on-the-go workflows, the laptop handles all photography and graphic design tasks with ease. Its 100% sRGB color space and 100% color-volume display is calibrated for true-to-life Delta E<1 accuracy. The stylish, ultra-slim laptop weighs about three pounds and has an impressive battery life for those long classroom lectures.

AI features reduce repetitious tasks for students on the go. Adobe Photoshop’s RTX-accelerated Super Resolution uses AI to upscale images with higher quality than standard methods. Photoshop Lightroom’s Enhance Details feature refines fine color details of high-resolution RAW images.

Features like Select Subject, which isolates people, and Select Sky, which isolates the sky, are massive time-savers. Consider Topaz Labs’ collection of AI apps that denoise, increase resolution and sharpen images with a click.

Systems for STEM

The science, technology, engineering and mathematics fields are growing in just about every way imaginable. More jobs, applications, majors and students are centering on STEM.

Studio laptops are equipped with NVIDIA RTX GPUs that provide acceleration for the top engineering, computer science, data science and economics applications. With real-time rendering for complex designs and simulations, faster image and signal processing, and the ability to develop larger, more accurate AI and data science models, students can spend more time learning and less time waiting.

HP Envy 16 NVIDIA Studio laptop.

Ansys Discovery’s Live Simulation mode runs only on NVIDIA GPUs, like those in the HP Envy 16, available at HP.com, and the Acer ConceptD 5 Pro from B&H. Both Studio laptops come equipped with a GeForce RTX 3060 Laptop GPU.

Acer ConceptD 5 Pro NVIDIA Studio laptop.

Engineering students looking for additional computing power can upgrade to a Studio laptop with an RTX 3080 Ti Laptop GPU to run SOLIDWORKS up to 8x faster than with the average GPU. The same system is capable of running RAPIDS — GPU-accelerated data analytics and machine learning — up to 15x faster than an average laptop, while ResNet-50 training in TensorFlow clocks in at a whopping 50x faster.

Get Your Game On

NVIDIA Studio laptops accelerate coursework, giving students loads of free time back — time that could be spent gaming. And Studio laptops come equipped with all the GeForce RTX benefits for gaming, delivering the most realistic and immersive graphics, increased frame rates with DLSS, the lowest system latency for competitive gaming, and stutter-free, top-quality live streaming.

Live streaming has become all the rage, and having a system that can keep up is essential. Fortunately, GeForce RTX Laptop GPUs have the horsepower to deliver dream streams.

The GIGABYTE AERO 15 NVIDIA Studio laptop.

Studio laptops with a GeForce RTX 3070 or 3070 Ti Laptop GPU are the sweet spot for live streaming, offering 1080p streams while gaming in 1440p with real-time ray tracing and DLSS. The GIGABYTE AERO 15 is available in a variety of configurations from Amazon, giving creative gamers a range of options.

The NVIDIA Broadcast app, free to download for RTX GPU owners, has several powerful AI features. Audio effects such as noise and echo removal, paired with visual effects like virtual background and auto frame, deliver professional-grade visuals with a single click.

For a break from class, Studio laptops deliver phenomenal PC gaming experiences.

The NVIDIA Studio Advantage 

Studio laptops come with a serious advantage for content creators and students — NVIDIA Studio technology to speed up content creation, including AI tools for editing and exclusive apps that elevate content.

Studio laptops come equipped with the most advanced hardware for content creation, powered by RTX GPUs with dedicated hardware for 3D workflows, video and AI.

The Studio suite of apps, exclusively available and free to RTX GPU owners, includes NVIDIA Omniverse for collaborative 3D editing, Broadcast for live streaming AI tools and Canvas for painting beautiful landscapes with AI.

Studio laptops and popular creative apps are supported by NVIDIA Studio Drivers — which come preinstalled to optimize creative apps and are extensively tested to deliver maximum reliability.

Artists looking to sharpen their skills can also access the NVIDIA Studio YouTube channel, an ever-growing collection of step-by-step tutorials from renowned artists, inspiring community showcases and more, assisting in content-creation education.

And, for a limited time, creators can purchase a Studio laptop and get Adobe Creative Cloud free for three months — a $238 value. This offer is valid for new and existing customers.

Learn how Studio systems take content creation to the next level. Check out the compare GPU page for a deeper dive, including options for professionals.

Check out the weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Get updates directly by signing up for the NVIDIA Studio newsletter.


How’s That? Startup Ups Game for Cricket, Football and More With Vision AI

Sports produce a slew of data. In a game of cricket, for example, each play generates millions of video-frame data points for a sports analyst to scrutinize, according to Masoumeh Izadi, managing director of deep-tech startup TVConal.

The Singapore-based company uses NVIDIA AI and computer vision to power its sports video analytics platform, which enables users — including sports teams, leagues and TV broadcasters — to gain performance insights from these massive amounts of data in real time.

Short for Television Content Analytics, TVConal provides video analytics for a variety of sports, with a focus on cricket, tennis, badminton and football.

Its platform — powered by the NVIDIA Metropolis application framework for vision AI — can detect important in-game events, model athlete behavior, make movement predictions and more. It all helps dissect the minute details in sports, enabling teams to make smarter decisions on the field.

TVConal is a member of NVIDIA Inception, a free program that supports startups revolutionizing industries with cutting-edge technology.

Automated Match Tagging

Match tagging — creating a timeline of significant in-game events — is crucial to sports video analytics. Tags are used to generate detailed reports that provide performance statistics and visual feedback for referees, coaches, athletes and fans.

Since plays and other in-game events occur in mere instants, up to 20 loggers work together to accomplish live tagging for some sports matches, according to Izadi. This can be time consuming and labor intensive.

With TVConal’s platform, sports analysts can extract insights from video frames with just a few clicks — as AI helps to automatically and accurately tag matches in real time. This gives analysts the time to dig deeper into the data and provide more detailed feedback for teams.

The platform can also catch critical moments or foul plays that the naked eye might miss.

“If a player does an illegal action that’s beyond a human’s ability to process in a few milliseconds, the platform can detect that and inform the umpires to take an action just in time,” Izadi said.

TVConal’s platform is built using NVIDIA Metropolis, which simplifies the development, deployment and scale of AI-enabled video analytics applications from edge to cloud. Metropolis includes pretrained models, training and optimization tools, software development kits, CUDA-X libraries and more — all optimized to run on NVIDIA-Certified Systems based on the NVIDIA EGX enterprise platform for accelerated computing.

“NVIDIA’s software tools, frameworks and hardware allow us to iterate faster and bring ideas to market with shortened life cycles and reduced costs,” Izadi said.

NVIDIA GPU-accelerated compute resources used in TVConal’s platform include the NVIDIA Jetson platform for AI at the edge, on-prem RTX 3090 workstations, and Tesla V100 and A100 GPUs in the cloud.

TVConal uses the NVIDIA DeepStream SDK to simplify video processing pipelines; NVIDIA pretrained models and the TAO toolkit to accelerate AI training; and the NVIDIA TensorRT SDK to optimize inference.

DeepStream enabled the TVConal team to process live video and audio streams in real time — the necessary speed to match video frame rates. In addition, the TensorRT library helped TVConal convert its machine learning models to more quickly process data, while maintaining accuracy.
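TVConal’s production pipeline isn’t shown publicly, but converting a trained model into a TensorRT engine typically follows a pattern like the sketch below (Python, TensorRT 8-style API). The ONNX file name, the FP16 flag and the output path are illustrative assumptions.

```python
# Generic sketch: build a serialized TensorRT engine from an ONNX model.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:      # hypothetical model file
    parser.parse(f.read())

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)    # assumes the GPU supports FP16

engine = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:      # serialized engine for deployment
    f.write(engine)
```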

And as a member of NVIDIA Inception, TVConal has access to technical resources, industry experts and go-to-market support.

The company’s clients include international production company NEP Group, the Pakistan Cricket Board and others.

“There is an increasing volume of sports content to extract value from,” said Izadi, highlighting that the global sports analytics market size is expected to grow over 20% by 2028. “Automated video processing is revolutionary in sports, and we are excited to build more advanced models and pipelines to keep the revolution going.”

More innovative players worldwide are using NVIDIA Metropolis for sports analytics, including startups Pixellot, Track160 and Veo.

Watch an on-demand NVIDIA GTC session about how AI is revolutionizing the sports industry — better predicting competition outcomes, improving performance and increasing viewers’ quality expectations.

Learn more about NVIDIA Metropolis and apply to join NVIDIA Inception.


What Is an Exaflop?

Computers are crunching more numbers than ever to crack the most complex problems of our time — how to cure diseases like COVID and cancer, mitigate climate change and more.

These and other grand challenges ushered computing into today’s exascale era, in which top performance is often measured in exaflops.

So, What’s an Exaflop?

An exaflop is a measure of performance for a supercomputer that can calculate at least 10^18, or one quintillion, floating point operations per second.

In exaflop, the exa- prefix means a quintillion: a billion billion, or a one followed by 18 zeros. Similarly, an exabyte describes a memory subsystem packing a quintillion bytes of data.

The “flop” in exaflop is an abbreviation for floating point operations. The rate at which a system executes them each second is measured in exaflop/s.

Floating point refers to calculations made where all the numbers are expressed with decimal points.

1,000 Petaflops = an Exaflop

The prefix peta- means 10^15, or a one with 15 zeros behind it. So, an exaflop is a thousand petaflops.

To get a sense of what a heady calculation an exaflop is, imagine a billion people, each holding a billion calculators. (Clearly, they’ve got big hands!)

If they all hit the equal sign at the same time, they’d execute one exaflop.

Indiana University, home to the Big Red 200 and several other supercomputers, puts it this way: To match what an exaflop computer can do in just one second, you’d have to perform one calculation every second for 31,688,765,000 years.
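That comparison is easy to sanity-check with a couple of lines of Python:

```python
ops = 1e18                               # one second of work on an exaflop machine
seconds_per_year = 60 * 60 * 24 * 365.25
print(ops / seconds_per_year)            # ~3.17e10 -- roughly 31.7 billion years
```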

A Brief History of the Exaflop

For most of supercomputing’s history, a flop was a flop, a reality that’s morphing as workloads embrace AI.

People used numbers expressed in the highest of several precision formats, called double precision, as defined by the IEEE Standard for Floating Point Arithmetic. It’s dubbed double precision, or FP64, because each number in a calculation requires 64 bits, data nuggets expressed as a zero or one. By contrast, single precision uses 32 bits.

Double precision uses those 64 bits to ensure each number is accurate to a tiny fraction. It’s like saying 1.0001 + 1.0001 = 2.0002, instead of 1 + 1 = 2.
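NumPy makes the difference easy to see; here is a quick sketch, using FP16 as the stand-in low-precision format:

```python
import numpy as np

# FP64 keeps the tiny fractions from the example above.
print(np.float64(1.0001) + np.float64(1.0001))  # 2.0002

# FP16 cannot represent 1.0001, so it rounds to 1.0 before the add.
print(np.float16(1.0001) + np.float16(1.0001))  # 2.0
```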

The format is a great fit for what made up the bulk of the workloads at the time — simulations of everything, from atoms to airplanes, that need to ensure their results come close to what they represent in the real world.

So, it was natural that the LINPACK benchmark, aka HPL, which measures performance on FP64 math, became the default measurement in 1993, when the TOP500 list of the world’s most powerful supercomputers debuted.

The Big Bang of AI

A decade ago, the computing industry heard what NVIDIA CEO Jensen Huang describes as the big bang of AI.

This powerful new form of computing started showing significant results on scientific and business applications. And it takes advantage of some very different mathematical methods.

Deep learning is not about simulating real-world objects; it’s about sifting through mountains of data to find patterns that enable fresh insights.

Its math demands high throughput, so doing many, many calculations with simplified numbers (like 1.01 instead of 1.0001) is much better than doing fewer calculations with more complex ones.

That’s why AI uses lower precision formats like FP32, FP16 and FP8. Their 32-, 16- and 8-bit numbers let users do more calculations faster.

Mixed Precision Evolves

For AI, using 64-bit numbers would be like taking your whole closet when going away for the weekend.

Finding the ideal lower-precision technique for AI is an active area of research.

For example, the first NVIDIA Tensor Core GPU, Volta, used mixed precision. It executed matrix multiplication in FP16, then accumulated the results in FP32 for higher accuracy.
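The pattern can be emulated on a CPU with NumPy to see why it works; this is a simplified sketch, not Tensor Core code:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((256, 256)).astype(np.float16)  # low-precision inputs
b = rng.standard_normal((256, 256)).astype(np.float16)

# Multiply FP16 inputs but accumulate the products in FP32, as on Volta.
c_mixed = a.astype(np.float32) @ b.astype(np.float32)

# FP64 reference result for comparison.
c_ref = a.astype(np.float64) @ b.astype(np.float64)

print(np.abs(c_mixed - c_ref).max())  # small error: FP32 accumulation preserves accuracy
```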

Hopper Accelerates With FP8

More recently, the NVIDIA Hopper architecture debuted with a lower-precision method for training AI that’s even faster. The Hopper Transformer Engine automatically analyzes a workload, adopts FP8 whenever possible and accumulates results in FP32.

When it comes to the less compute-intensive job of inference — running AI models in production — major frameworks such as TensorFlow and PyTorch support 8-bit integer numbers for fast performance. That’s because they don’t need decimal points to do their work.

The good news is NVIDIA GPUs support all of these precision formats, so users can accelerate every workload optimally.

Last year, the IEEE P3109 committee started work on an industry standard for precision formats used in machine learning. This work could take another year or two.

Some Sims Shine at Lower Precision

While FP64 remains popular for simulations, many use lower-precision math when it delivers useful results faster.

HPC apps vary in the factors that impact their performance.

For example, researchers run LS-Dyna from Ansys, a popular simulator for car crashes, in FP32. Genomics is another field that tends to prefer lower-precision math.

In addition, many traditional simulations are starting to adopt AI for at least part of their workflows. As workloads shift towards AI, supercomputers need to support lower precision to run these emerging applications well.

Benchmarks Evolve With Workloads

Recognizing these changes, researchers including Jack Dongarra — the 2021 Turing Award winner and a contributor to HPL — debuted HPL-AI in 2019, a benchmark better suited to measuring these newer workloads.

“Mixed-precision techniques have become increasingly important to improve the computing efficiency of supercomputers, both for traditional simulations with iterative refinement techniques as well as for AI applications,” Dongarra said in a 2019 blog. “Just as HPL allows benchmarking of double-precision capabilities, this new approach based on HPL allows benchmarking of mixed-precision capabilities of supercomputers at scale.”
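The iterative refinement idea Dongarra mentions can be sketched in a few lines of NumPy: factor and solve in a lower precision, then repeatedly correct the answer using a high-precision residual. FP32 stands in here for the FP16 Tensor Core math an HPL-AI run would actually use.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)

# Initial solve in low precision (FP32 here).
x = np.linalg.solve(A.astype(np.float32), b.astype(np.float32)).astype(np.float64)

# Refinement: compute the residual in FP64, solve for a correction in low precision.
for _ in range(3):
    r = b - A @ x
    x += np.linalg.solve(A.astype(np.float32), r.astype(np.float32)).astype(np.float64)

print(np.linalg.norm(b - A @ x))  # residual shrinks toward FP64-level accuracy
```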

Thomas Lippert, director of the Jülich Supercomputing Center, agreed.

“We’re using the HPL-AI benchmark because it’s a good measure of the mixed-precision work in a growing number of our AI and scientific workloads — and it reflects accurate 64-bit floating point results, too,” he said in a blog posted last year.

Today’s Exaflop Systems

In a June report, 20 supercomputer centers around the world reported their HPL-AI results, three of them delivering more than an exaflop.

One of those systems, a supercomputer at Oak Ridge National Laboratory, also exceeded an exaflop in FP64 performance on HPL.

A sampler of the June 2022 HPL-AI results.

Two years ago, a very unconventional system was the first to hit an exaflop. The crowd-sourced supercomputer assembled by the Folding@home consortium passed the milestone after it put out a call for help fighting the COVID-19 pandemic and was deluged with donated time on more than a million computers.

Exaflop in Theory and Practice

Since then, many organizations have installed supercomputers that deliver more than an exaflop in theoretical peak performance. It’s worth noting that the TOP500 list reports both Rmax (actual) and Rpeak (theoretical) scores.

Rmax is simply the best performance a computer actually demonstrated.

Rpeak is a system’s top theoretical performance if everything could run at its highest possible level, something that almost never really happens. It’s typically calculated by multiplying the number of processors in a system by their clock speed, then multiplying the result by the number of floating point operations each processor can perform per clock cycle.
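In other words, Rpeak is just arithmetic. A quick sketch with made-up numbers, which don’t describe any real machine:

```python
# Hypothetical system: every number below is illustrative, not a real machine's spec.
num_processors = 90_000       # accelerators in the system
clock_hz = 1.4e9              # 1.4 GHz clock
flops_per_cycle = 8_192       # FP64 operations each processor completes per cycle

rpeak = num_processors * clock_hz * flops_per_cycle
print(f"Rpeak = {rpeak:.2e} FLOP/s = {rpeak / 1e18:.2f} exaflops")  # ~1.03 exaflops
```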

So, if someone says their system can do an exaflop, consider asking if that’s using Rmax (actual) or Rpeak (theoretical).

Many Metrics in the Exaflop Age

It’s another one of the many nuances in this new exascale era.

And it’s worth noting that HPL and HPL-AI are synthetic benchmarks, meaning they measure performance on math routines, not real-world applications. Other benchmarks, like MLPerf, are based on real-world workloads.

In the end, the best measure of a system’s performance, of course, is how well it runs a user’s applications. That’s a measure not based on exaflops, but on ROI.
