NVIDIA CEO Ignites AI Conversation in Stockholm

More than 600 entrepreneurs, developers, researchers and executives from across the Nordics flocked Tuesday to Stockholm’s sleek Sergel Hub conference center in a further sign of the strength of the region’s AI ecosystem.

The highlight: a far-reaching conversation between NVIDIA founder and CEO Jensen Huang and Swedish industrialist Marcus Wallenberg exploring the intersections of AI, green computing, and Scandinavia’s broader tech scene.

“This generative AI phenomenon is creating a whole slew of new startups, new ideas, new video editing, image editing, new text,” Huang said. “It can achieve capabilities that previous computing platforms cannot.”

The event coincided with an upgrade to the Berzelius supercomputer. Named for Jöns Jacob Berzelius, one of the fathers of modern chemistry, the system now comprises 94 NVIDIA DGX A100 AI computing systems, delivering nearly half an exaflop of AI performance and placing it among the world’s 100 fastest AI supercomputers.

“Years ago, Marcus and I started talking about a new way of doing computer science. Having a key instrument, like Berzelius, would be a fundamental instrument of future science,” Huang told the audience. “The work that is done on this instrument would make tremendous impacts to life sciences, material sciences, physical sciences and computer science.”

Maximum Efficiency, Minimal Impact

Electricity generation remains a major source of greenhouse gas emissions, so powerful, energy-efficient computers are crucial to fighting climate change through green computing.

Huang explained that whether for data centers or the latest smartphone, computer chips, systems and software must be designed and used to maximize energy efficiency and minimize environmental impact.

“Companies large and small have to sign up for the carbon footprint that we use to build the work that we do,” said Huang. “If there’s an opportunity for us to help accelerate workloads and reduce energy use and improve energy efficiency, we will.”

Sweden’s Role in AI

The upgrade comes as AI is powering change in every industry across the globe, with leaders from across the Nordics accelerating the growth of some of the world’s most powerful AI solutions, explained Wallenberg.

“From the perspective of the foundations, we’re trying to work for the betterment of Sweden by promoting the areas of research, technology and medicine,” said Wallenberg, whose family has for generations been deeply involved across the nation’s economy. “We are working together as a team to create possibilities and the foundations for more work to be done.”

The Berzelius system was used to train the first Swedish large language model. Having grown roughly 10x in size every year for the past few years, large language models are just one state-of-the-art AI technology promising transformation through learned knowledge.

LLMs — neural networks trained on massive datasets using powerful systems — are accelerating discoveries across industries such as healthcare and climate science through software frameworks like NVIDIA BioNeMo. Models like ChatGPT are making a name for themselves as a new way to use AI.

“You can connect models together to retrieve new information so that models like ChatGPT could report on the news today, who won that game, or the latest weather,” Huang said. “The combination of these capabilities means not only the ability to respond and answer questions and write stories, but it can also write programs and solve problems.”
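
Huang was describing what practitioners call retrieval-augmented generation: fetch fresh, relevant text first, then let the model condition its answer on it. Below is a minimal Python sketch of that pattern — the toy documents, the stand-in embed() function and the stubbed-out generation step are illustrative assumptions, not any specific NVIDIA product.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding; a real system uses a trained language model encoder."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.random(64)

# Toy document store; in practice this would be a live news or sports feed.
docs = [
    "Stockholm weather today: light snow, -3 degrees C.",
    "Final score: the home team won 3-2 in overtime.",
]
doc_vecs = np.stack([embed(d) for d in docs])

def answer(question: str) -> str:
    q = embed(question)
    best = int(np.argmax(doc_vecs @ q))   # retrieve the most similar document
    context = docs[best]
    # A real system would pass context + question to an LLM here.
    return f"Context: {context}\nQ: {question}\nA: ..."

print(answer("Who won the game?"))
```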

Knowledge From Data

Solving problems requires reliable, physically accurate data. The industrial metaverse, where digital twins of real factories, rail networks or retail stores can be created, is already being used by large companies like Amazon, BMW, Ericsson and Siemens.

Following the conversation between Huang and Wallenberg, Staffan Truvé, CTO and co-founder of cybersecurity company Recorded Future, talked about how data can be used to model intelligence as a digital twin to get an end-to-end view of threats and targets.

“Today, there are three major converging threat areas. Physical, cyber and influence, which is the threat to our brains,” Truvé explained. “By creating an intelligence graph, we’re building a full picture of a threat.”

Digital twins are not the only way to gather valuable insights when developing for the future. Sara Mazur, vice executive director of the Knut and Alice Wallenberg Foundation and chair of the Wallenberg AI, Autonomous Systems and Software Program, highlighted the importance of collaboration between academia and industry.

Supersizing AI: Sweden Turbocharges Its Innovation Engine

Sweden is outfitting its AI supercomputer for a journey to the cutting edge of machine learning, robotics and healthcare.

It couldn’t ask for a better guide than Anders Ynnerman. His signature blue suit, black spectacles and gentle voice act as calm camouflage for a pioneering spirit.

Early on, he showed a deep interest in space, but his career took a different direction. He established the country’s first network of supercomputing centers and went on to pioneer scientific visualization technologies used in hospitals and museums around the world.

Today, he leads Sweden’s largest research effort, WASP — the Wallenberg Artificial Intelligence, Autonomous Systems and Software Program — focused on AI innovation.

The Big Picture

“This is a year when people are turning their focus to sustainability challenges we face as a planet,” said the Linköping University professor. “Without advances in AI and other innovations, we won’t have a sustainable future.”

To supercharge environmental efforts and more, Sweden will upgrade its Berzelius supercomputer. Based on the NVIDIA DGX SuperPOD, it will deliver nearly half an exaflop of AI performance, placing it among the world’s 100 fastest AI supercomputers.

“A machine like Berzelius is fundamental not only for the results it delivers, but the way it catalyzes expertise in Sweden,” he said. “We’re a knowledge-driven nation, so our researchers and companies need access to the latest technology to compete.”

AI Learns Swedish

In June, researchers used the system to train GPT-SW3, a family of large language models capable of drafting a speech or answering questions in Swedish.

Today, a more powerful version sports 20 billion parameters, a popular measure of a neural network’s smarts. It can help developers write software and handle other complex tasks.

Long term, researchers aim to train a version with a whopping 175 billion parameters that’s also fluent in Nordic languages like Danish and Norwegian.

One of Sweden’s largest banks is already exploring use of the latest GPT-SW3 variant for a chatbot and other applications.

A Memory Boost

To build big AIs, Berzelius will add 34 NVIDIA DGX A100 systems to the cluster of 60 that makes up the SuperPOD. The new units will sport GPUs with 80GB of memory each.

Ynnerman with Berzelius at the system’s March 2021 launch.

“Having really fat nodes with large memory is important for some of these models,” Ynnerman said. Atos, the system integrator, is providing “a very smooth ride getting the whole process set up,” he added.

Seeking a Cure for Cancer

In healthcare, a data-driven life sciences program, funded by the Wallenberg Foundation, will be a major Berzelius user. The program spans 10 universities and will, among other applications, employ AI to understand protein folding, fundamentally important to understanding diseases like cancer.

Others will use Berzelius to improve detection of cancer cells and navigate the massive mounds of data in human genomes.

Some researchers are exploring tools such as NVIDIA Omniverse Avatar Cloud Engine and NVIDIA BotMaker to create animated patients. Powered by GPT-SW3, they could help doctors practice telemedicine skills.

Robots in Zero Gravity

Sweden’s work in image and video recognition will get a boost from Berzelius. Such algorithms advance work on the autonomous systems used in modern factories and warehouses.

One project is exploring how autonomous systems act in space and undersea. It’s a topic close to the heart of a recent addition to WASP, researcher Christer Fuglesang, who was named Sweden’s first astronaut in 1992.

Fuglesang went to the International Space Station in 2006 and 2009. Later, as a professor of physics at Sweden’s Royal Institute of Technology, he collaborated with Ynnerman on live shows about life in space, presented in the WISDOME dome theater at the Visualization Center C Ynnerman founded and directs.

Thanks to his expertise in visualization, “I can go to Mars whenever I want,” Ynnerman quipped.

He took NVIDIA founder and CEO Jensen Huang and Marcus Wallenberg — scion of Sweden’s leading industrial family — on a tour of outer space at the dome to mark the Berzelius upgrade. The dome can show the Martian surface in 8K resolution at 120 frames per second, thanks to its use of 12 NVIDIA Quadro RTX 8000 GPUs.

Inspiring the Next Generation

Ynnerman’s algorithms have touched millions who’ve seen visualizations of Egyptian mummies at the British Museum.

“That makes me even more proud than some of my research papers because many are young people we can inspire with a love for science and technology,” he said.

A passion for science and technology has attracted more than 400 active Ph.D. candidates so far to WASP, which is on the way to exceeding its goal of 600 grads by 2031.

But even a visualization specialist can’t be everywhere. So Ynnerman’s pet project will use AI to create a vibrant, virtual museum guide.

“I think we can provide more people a ‘wow’ experience — I want a copilot when I’m navigating the universe,” he said.

3D Artist Enters the Node Zone, Creating Alien Artifacts This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Artist Ducky 3D creates immersive experiences through vibrant visuals and beautiful 3D environments in the alien-inspired animation Stylized Alien Landscape — this week In the NVIDIA Studio.

Ducky 3D is a modern Renaissance man who works with musicians from around the world, creating tour packages and visual art for the music industry. As a 3D fanatic who specializes in Blender, he often guides emerging and advanced 3D artists to new creative heights. It’s no surprise that his YouTube channel, home to weekly tutorials, has garnered over 400,000 subscribers.

Stylized Alien Landscape is strikingly individualistic and was built entirely in Blender using geometry nodes, or geo nodes.

Geo nodes can add organic style and customization to Blender scenes and animation.

The use of geo nodes in Blender has recently skyrocketed. That’s because they make modeling an almost entirely procedural process — allowing for non-linear, non-destructive workflows and the instancing of objects — so artists can create incredibly detailed scenes from small amounts of data. Geo nodes can also organically modify all types of geometry, including meshes, curves, volumes and instances. Many of these were edited in the making of Stylized Alien Landscape.

Ducky 3D opened a new scene, created a simple cube and applied several popular geo nodes, including random value, triangulate and dual mesh. By simple trial and error with numeric values, he was able to create a provocative, alien-inspired visual.
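
For readers who want to reproduce that starting point, here is a minimal Blender Python sketch that builds a cube and wires Triangulate and Dual Mesh geometry nodes onto it. It assumes the Blender 3.x node-tree API (node and socket identifiers can vary between versions) and is only a skeleton of the approach, not Ducky 3D’s actual node graph:

```python
import bpy

# Start from a simple cube, as in the tutorial.
bpy.ops.mesh.primitive_cube_add()
obj = bpy.context.active_object

# Attach a Geometry Nodes modifier backed by a fresh node tree.
mod = obj.modifiers.new(name="AlienSurface", type='NODES')
tree = bpy.data.node_groups.new("AlienSurface", 'GeometryNodeTree')
mod.node_group = tree

# Group input/output sockets (Blender 3.x API; 4.x uses tree.interface).
tree.inputs.new('NodeSocketGeometry', "Geometry")
tree.outputs.new('NodeSocketGeometry', "Geometry")
group_in = tree.nodes.new('NodeGroupInput')
group_out = tree.nodes.new('NodeGroupOutput')

# Triangulate, then Dual Mesh -- the combo behind the organic look.
tri = tree.nodes.new('GeometryNodeTriangulate')
dual = tree.nodes.new('GeometryNodeDualMesh')

tree.links.new(group_in.outputs["Geometry"], tri.inputs["Mesh"])
tree.links.new(tri.outputs["Mesh"], dual.inputs["Mesh"])
tree.links.new(dual.outputs["Dual Mesh"], group_out.inputs["Geometry"])
```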

“I use geometry nodes to take advantage of the dual mesh, which creates organic shapes by manipulating with simple deformations,” he said.

Ducky 3D’s GeForce RTX 4090 GPU ensured smooth movement in the viewport with virtually no noise.

Simply adding a transform node to the mix got the animation going. Ducky 3D then copied all nodes and scaled the duplicated render to create two animations rotating simultaneously.

Next, Ducky 3D turned his focus to lighting the object, selecting the Blender Cycles renderer to do so.

“Rendering lighting is drastically better in Cycles, but you do you,” he said with candor.

Blender Cycles RTX-accelerated OptiX ray tracing in the viewport unlocks interactive, photoreal rendering for modeling and animation work.

Ducky 3D applies shading nodes to “Stylized Alien Landscape.”  

Here, Ducky 3D quickly added realism in two ways: adding depth of field by playing with distance options in the flat shaded view, and bringing the background out of focus while keeping the object sharp.

Volume “just makes things look cool,” Ducky 3D added. Selecting the world and clicking principled volume made the scene nearly photorealistic.

With the help of geo nodes, Ducky 3D refined the texture to his desired effect, using the bump node, color ramp and noise texture.

For more on the making of Stylized Alien Landscape, check out the video below.

“I needed my viewport to perform well enough to see detail through the added volume,” he said. “Thank goodness for the AI-powered NVIDIA OptiX ray tracing API that my GeForce RTX 4090 GPU enables.”

Ducky 3D achieved the slightly odd atmosphere he wanted for the piece by adding fog.

“Fog is tough to render, and the GPU helped me see my viewport clearly,” he said.

3D artist Ducky 3D’s workstation. 


For more Blender tutorials, check out Ducky 3D’s YouTube channel or the NVIDIA Studio Blender playlist.

Enter the #NewYearNewArt Challenge 

A new year comes with new art, and we’d love to see yours! Use the hashtag #NewYearNewArt and tag @NVIDIAStudio to show off your most recent creations for a chance to be featured on our channels.

A new year means new art!

Join our Jan-Feb #NewYearNewArt challenge by sharing any new or relatively new art you’ve created for a chance to be featured on our channels!

Be sure to tag #NewYearNewArt and thanks to @AOi__Pan for sharing their new art. pic.twitter.com/lXiFLROhQh

— NVIDIA Studio (@NVIDIAStudio) January 10, 2023

There have been stunning animations like this lively work from the amazing @stillmanvisual.

Synapses (2023).

Little collaboration with @NVIDIAStudio for their #NewYearNewArt challenge pic.twitter.com/ZI6vkBLNGC

— 𝑺𝑻𝑰𝑳𝑳𝑴𝑨𝑵 (@stillmanvisual) January 23, 2023

There’s also explosive new content from @TheRealYarnHub featuring some action-packed, historically based battles.

New video coming out tomorrow on @YouTube @NVIDIAStudio #NewYearNewArt @UnrealEngine @RoyalNavy @TeamYouTube @YouTubeCreators pic.twitter.com/iEdZGbrUGI

— Yarnhub (@TheRealYarnHub) January 19, 2023

Catch even more #NewYearNewArt entries from other creators on the NVIDIA Studio Instagram stories.

Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Fresh AI on Security: Digital Fingerprinting Deters Identity Attacks

Add AI to the list of defenses against identity attacks, one of the most common and hardest-to-prevent types of breach.

More than 40% of all data compromises involved stolen credentials, according to the 2022 Verizon Data Breach Investigations Report. And a whopping 80% of all web application breaches involved credential abuse.

“Credentials are the favorite data type of criminal actors because they are so useful for masquerading as legitimate users on the system,” the report said.

In today’s age of zero trust, security experts say it’s not a matter of if but when they’ll experience an identity attack.

A Response From R&D

Bartley Richardson, director of cybersecurity engineering and R&D at NVIDIA, articulates the challenge simply.

“We need to look for when Bartley is not acting like Bartley,” he said.

Last year, his team described a concept called digital fingerprinting. In the wake of highly publicized attacks in February, he came up with a simple but ambitious idea for implementing it.

A Big Ask

He called a quick meeting with his two tech leads to share the idea. Richardson told them he wanted to create a deep learning model for every account, server, application and device on the network.

The models would learn individual behavior patterns and alert security staff when an account was acting in an uncharacteristic way. That’s how they would deter attacks.

The tech leads thought it was a crazy idea. It was computationally impossible, they told him, and no one was even using GPUs for security yet.

Richardson listened to their concerns and slowly convinced them it was worth a try. They would start with just a model for every account.
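
NVIDIA hasn’t published the model internals here, but the core idea can be sketched as a toy per-account anomaly detector: learn each account’s typical behavior from historical event features, then score new events by how far they deviate. The features, threshold and z-score method below are illustrative assumptions, not the Morpheus implementation.

```python
import numpy as np

class AccountFingerprint:
    """Toy behavioral baseline for one account (illustrative only)."""
    def __init__(self, history: np.ndarray):
        # history: rows of event features, e.g. [hour_of_day, bytes_sent]
        self.mean = history.mean(axis=0)
        self.std = history.std(axis=0) + 1e-6

    def anomaly_score(self, event: np.ndarray) -> float:
        # Mean absolute z-score: how unlike this account's normal behavior?
        return float(np.abs((event - self.mean) / self.std).mean())

# One model per account, as in the idea Richardson pitched.
history = {"bartley": np.array([[9, 120], [10, 150], [11, 130]], dtype=float)}
models = {acct: AccountFingerprint(h) for acct, h in history.items()}

event = np.array([3, 9000], dtype=float)   # 3 a.m., unusually large transfer
score = models["bartley"].anomaly_score(event)
if score > 3.0:                            # threshold is an assumption
    print(f"alert: bartley not acting like bartley (score={score:.1f})")
```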

Everybody’s Problem

Security managers know it’s a big-data problem.

Companies collect terabytes of data on network events every day. That’s just a fraction of the petabytes of events a day companies could log if they had the resources, according to Daniel Rohrer, NVIDIA’s vice president of software product security.

The fact that it’s a big-data problem is also good news, Rohrer said in a talk at GTC in September (watch free with registration). “We’re already well on the way to combining our cybersecurity and AI efforts,” he said.

Starting With a Proof of Concept

By mid-March, Richardson’s team was focused on ways to run thousands of AI models in tandem. They used NVIDIA Morpheus, an AI security software library announced a year earlier, to build a proof of concept in two months.

Once an entire, albeit crude, product was done, they spent another two months optimizing each portion.

Then they reached out to about 50 NVIDIANs to review their work — security operations and product security teams, and IT folks who would be alpha users.

An Initial Deployment

Three months later, in early October, they had a solution NVIDIA could deploy on its global networks — security software for AI-powered digital fingerprinting.

The software is a kind of LEGO kit, an AI framework anyone can use to create a custom cybersecurity solution.

Version 2.0 is running across NVIDIA’s networks today on just four NVIDIA A100 Tensor Core GPUs. IT staff can create their own models, changing aspects of them to create specific alerts.

Tested and Released

NVIDIA is making these capabilities available in a digital fingerprinting AI workflow included with NVIDIA AI Enterprise 3.0, announced in December.

For identity attackers, “the models Bartley’s team built have anomaly scores that are off the charts, and we’re able to visualize events so we can see things in new ways,” said Jason Recla, NVIDIA’s senior director of information security.

As a result, instead of facing a tsunami of 100 million network events a week, an IT team may have just 8-10 incidents to investigate daily. That cuts the time to detect certain attack patterns from weeks to minutes.

Tailoring AI for Small Events

The team already has big ideas for future versions.

“Our software works well on major identity attacks, but it’s not every day you have an incident like that,” Richardson said. “So, now we’re tuning it with other models to make it more applicable to everyday vanilla security incidents.”

Meanwhile, Richardson’s team used the software to create a proof of concept for a large consulting firm.

“They wanted it to handle a million records in a tenth of a second. We did it in a millionth of a second, so they’re fully on board,” Richardson said.

The Outlook for AI Security

Looking ahead, the team has ideas for applying AI and accelerated computing to secure digital identities and generate hard-to-find training data.

Richardson imagines passwords and multi-factor authentication will be replaced by models that know how fast a person types, with how many typos, what services they use and when they use them. Such detailed digital identities will prevent attackers from hijacking accounts and pretending they are legitimate users.

Data on network events is gold for building AI models that harden networks, but no one wants to share details of real users and break-ins. Synthetic data, generated by a variant of digital fingerprinting, could fill the gap, letting users create what they need to fit their use case.

In the meantime, Recla has advice security managers can act on now.

“Get up to speed on AI,” he said. “Start investing in AI engineering and data science skills — that’s the biggest thing.”

Digital fingerprinting is not a panacea. It’s one more brick in an ever-evolving digital wall that a community of security specialists is building against the next big attack.

You can try this AI-powered security workflow live on NVIDIA LaunchPad starting Jan. 23. And you can watch the video below to learn more about digital fingerprinting.

Booked for Brilliance: Sweden’s National Library Turns Page to AI to Parse Centuries of Data

For the past 500 years, the National Library of Sweden has collected virtually every word published in Swedish, from priceless medieval manuscripts to present-day pizza menus.

Thanks to a centuries-old law that requires a copy of everything published in Swedish to be submitted to the library — also known as Kungliga biblioteket, or KB — its collections span from the obvious to the obscure: books, newspapers, radio and TV broadcasts, internet content, Ph.D. dissertations, postcards, menus and video games. It’s a wildly diverse collection of nearly 26 petabytes of data, ideal for training state-of-the-art AI.

“We can build state-of-the-art AI models for the Swedish language since we have the best data,” said Love Börjeson, director of KBLab, the library’s data lab.

Using NVIDIA DGX systems, the group has developed more than two dozen open-source transformer models, available on Hugging Face. The models, downloaded by up to 200,000 developers per month, enable research at the library and other academic institutions.
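
Loading one of the lab’s open-source models takes only a few lines with the Hugging Face transformers library. The model id below is an assumption for illustration — check the KBLab organization page on Hugging Face for the lab’s actual published models:

```python
# pip install transformers torch
from transformers import pipeline

# Model id assumed for illustration; see huggingface.co/KBLab for the
# lab's currently published Swedish models.
fill = pipeline("fill-mask", model="KBLab/bert-base-swedish-cased")

for guess in fill("Stockholm är Sveriges [MASK]."):
    print(guess["token_str"], round(guess["score"], 3))
```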

“Before our lab was created, researchers couldn’t access a dataset at the library — they’d have to look at a single object at a time,” Börjeson said. “There was a need for the library to create datasets that enabled researchers to conduct quantity-oriented research.”

With these tools, researchers will soon be able to create hyper-specialized datasets — for example, pulling up every Swedish postcard that depicts a church, every text written in a particular style or every mention of a historical figure across books, newspaper articles and TV broadcasts.

Turning Library Archives Into AI Training Data

The library’s datasets represent the full diversity of the Swedish language — including its formal and informal variations, regional dialects and changes over time.

“Our inflow is continuous and growing — every month, we see more than 50 terabytes of new data,” said Börjeson. “Between the exponential growth of digital data and ongoing work digitizing physical collections that date back hundreds of years, we’ll never be finished adding to our collections.”

The library’s archives include audio, text and video.

Soon after KBLab was established in 2019, Börjeson saw the potential for training transformer language models on the library’s vast archives. He was inspired by an early multilingual natural language processing model from Google whose training data included 5GB of Swedish text.

KBLab’s first model used 4x as much — and the team now aims to train its models on at least a terabyte of Swedish text. The lab began experimenting by adding Dutch, German and Norwegian content to its datasets after finding that a multilingual dataset may improve the AI’s performance.

NVIDIA AI, GPUs Accelerate Model Development 

The lab started out using consumer-grade NVIDIA GPUs, but Börjeson soon discovered his team needed data-center-scale compute to train larger models.

“We realized we can’t keep up if we try to do this on small workstations,” said Börjeson. “It was a no-brainer to go for NVIDIA DGX. There’s a lot we wouldn’t be able to do at all without the DGX systems.”

The lab has two NVIDIA DGX systems from Swedish provider AddPro for on-premises AI development. The systems are used to handle sensitive data, conduct large-scale experiments and fine-tune models. They’re also used to prepare for even larger runs on massive, GPU-based supercomputers across the European Union — including the MeluXina system in Luxembourg.

“Our work on the DGX systems is critically important, because once we’re in a high-performance computing environment, we want to hit the ground running,” said Börjeson. “We have to use the supercomputer to its fullest extent.”

The team has also adopted NVIDIA NeMo Megatron, a PyTorch-based framework for training large language models, with NVIDIA CUDA and the NVIDIA NCCL library under the hood to optimize GPU usage in multi-node systems.

“We rely to a large extent on the NVIDIA frameworks,” Börjeson said. “It’s one of the big advantages of NVIDIA for us, as a small lab that doesn’t have 50 engineers available to optimize AI training for every project.”

Harnessing Multimodal Data for Humanities Research

In addition to transformer models that understand Swedish text, KBLab has an AI tool that transcribes sound to text, enabling the library to transcribe its vast collection of radio broadcasts so that researchers can search the audio records for specific content.
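
A transcription pass with such a tool can be sketched the same way. Again, the model id and audio file are placeholders rather than confirmed details of KBLab’s pipeline:

```python
# pip install transformers torch
from transformers import pipeline

# Model id assumed for illustration; KBLab publishes Swedish wav2vec2
# speech models on Hugging Face under the KBLab organization.
asr = pipeline("automatic-speech-recognition",
               model="KBLab/wav2vec2-large-voxrex-swedish")

print(asr("radio_broadcast_clip.wav")["text"])   # hypothetical audio file
```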

AI-enhanced databases are the latest evolution of library records, which were long stored in physical card catalogs.

KBLab is also starting to develop generative text models and is working on an AI model that could process videos and create automatic descriptions of their content.

“We also want to link all the different modalities,” Börjeson said. “When you search the library’s databases for a specific term, we should be able to return results that include text, audio and video.”

KBLab has partnered with researchers at the University of Gothenburg, who are developing downstream apps using the lab’s models to conduct linguistic research — including a project supporting the Swedish Academy’s work to modernize its data-driven techniques for creating Swedish dictionaries.

“The societal benefits of these models are much larger than we initially expected,” Börjeson said.

Images courtesy of Kungliga biblioteket

What Is AI Computing?

The abacus, sextant, slide rule and computer. Mathematical instruments mark the history of human progress.

They’ve enabled trade, helped navigate oceans and advanced understanding and quality of life.

The latest tool propelling science and industry is AI computing.

AI Computing Defined

AI computing is the math-intensive process of calculating machine learning algorithms, typically using accelerated systems and software. It can extract fresh insights from massive datasets, learning new skills along the way.

It’s the most transformational technology of our time because we live in a data-centric era, and AI computing can find patterns no human could.

For example, American Express uses AI computing to detect fraud in billions of annual credit card transactions. Doctors use it to spot tumors, finding tiny anomalies in mountains of medical images.

Three Steps to AI Computing

Before getting into the many use cases for AI computing, let’s explore how it works.

First, users, often data scientists, curate and prepare datasets, a stage called extract/transform/load, or ETL. This work can now be accelerated on NVIDIA GPUs with Apache Spark 3.0, one of the most popular open source engines for mining big data.
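
As a flavor of that ETL stage, here’s a minimal PySpark sketch. The plugin setting is how Spark SQL jobs are typically pointed at the RAPIDS Accelerator for NVIDIA GPUs (it requires the plugin jar and a GPU-equipped cluster); the file names and columns are made up for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# The spark.plugins setting enables the RAPIDS Accelerator so that
# supported SQL operations run on NVIDIA GPUs (plugin jar required).
spark = (SparkSession.builder
         .appName("etl-sketch")
         .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
         .getOrCreate())

# Extract: read raw events (hypothetical file and schema).
raw = spark.read.parquet("raw_events.parquet")

# Transform: drop bad rows, derive a feature column, aggregate.
features = (raw.dropna(subset=["user_id", "timestamp"])
               .withColumn("hour", F.hour("timestamp"))
               .groupBy("user_id", "hour")
               .count())

# Load: write the training-ready table back out.
features.write.mode("overwrite").parquet("features.parquet")
```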

Second, data scientists choose or design AI models that best suit their applications.

Some companies design and train their own models from the ground up because they are pioneering a new field or seeking a competitive advantage. This process requires some expertise and potentially an AI supercomputer, capabilities NVIDIA offers.

Machine learning operations (MLOps) describe in finer detail the three major steps of AI computing — ETL (top row), training (lower right) and inference (lower left).

Many companies choose pretrained AI models they can customize as needed for their applications. NVIDIA provides dozens of pretrained models and tools for customizing them on NGC, a portal for software, services and support.

Third, companies sift their data through their models. This key step, called inference, is where AI delivers actionable insights.

The three-step process involves hard work, but there’s help available, so everyone can use AI computing.

For example, NVIDIA TAO Toolkit can collapse the three steps into one using transfer learning, a way of tailoring an existing AI model for a new application without needing a large dataset. In addition, NVIDIA LaunchPad gives users hands-on training in deploying models for a wide variety of use cases.
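
Transfer learning itself is easy to sketch outside any particular toolkit. The generic PyTorch recipe below — freeze a pretrained backbone, train only a new task-specific head on a small dataset — illustrates the idea; it is not the TAO Toolkit’s own pipeline:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone (torchvision >= 0.13 API).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers; only the new head will learn.
for p in model.parameters():
    p.requires_grad = False

# Replace the classifier head for a hypothetical 5-class task.
model.fc = nn.Linear(model.fc.in_features, 5)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a fake batch.
x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 5, (8,))
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```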

Inside an AI Model

AI models are called neural networks because they’re inspired by the web-like connections in the human brain.

If you slice into one of these AI models, it might look like a mathematical lasagna, made up of layers of linear algebra equations. One of the most popular forms of AI is called deep learning because it uses many layers.

An example of a deep learning model that identifies an image. From an article on deep learning for the U.S. National Academy of Sciences. Image credit: Lucy Reading-Ikkanda (artist).

If you zoom in, you’d see each layer is made up of stacks of equations. Each represents the likelihood that one piece of data is related to another.

AI computing multiplies together every stack of equations in every layer to find patterns. It’s a huge job that requires highly parallel processors sharing massive amounts of data on fast computer networks.
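
In code, that layered math reduces to chained matrix multiplications with a nonlinearity in between. A minimal sketch of a forward pass through a two-layer network (the sizes and random weights are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(784)                 # one input, e.g. a flattened image

# Each layer is a weight matrix plus bias: a "stack of equations".
W1, b1 = rng.random((128, 784)), rng.random(128)
W2, b2 = rng.random((10, 128)), rng.random(10)

h = np.maximum(0, W1 @ x + b1)      # layer 1: multiply, then ReLU
scores = W2 @ h + b2                # layer 2: multiply again
print(scores.argmax())              # index of the most likely class
```

Every matrix multiply above is exactly the kind of massively parallel arithmetic GPUs were built for.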

GPU Computing Meets AI

GPUs are the de facto engines of AI computing.

NVIDIA debuted the first GPU in 1999 to render 3D images for video games, a job that required massively parallel calculations.

GPU computing soon spread to use in graphics servers for blockbuster movies. Scientists and researchers packed GPUs into the world’s largest supercomputers to study everything from the chemistry of tiny molecules to the astrophysics of distant galaxies.

When AI computing emerged more than a decade ago, researchers were quick to embrace NVIDIA’s programmable platform for parallel processing. The video below celebrates this brief history of the GPU.

The History of AI Computing

The idea of artificial intelligence goes back at least as far as Alan Turing, the British mathematician who helped crack coded messages during WWII.

“What we want is a machine that can learn from experience,” Turing said in a 1947 lecture in London.

Alan Turing

Acknowledging his insights, NVIDIA named one of its computing architectures for him.

Turing’s vision started becoming a reality in 2012, when researchers developed AI models that recognized images with unprecedented speed and accuracy. Results from the ImageNet competition greatly accelerated progress in computer vision.

Today, companies such as Landing AI, founded by machine learning luminary Andrew Ng, are applying AI and computer vision to make manufacturing more efficient. And AI is bringing human-like vision to sports, smart cities and more.

AI Computing Starts Up Conversational AI

AI computing made huge inroads in natural language processing after the invention of the transformer model in 2017. The transformer is built around a machine-learning technique called “attention” that can capture context in sequential data like text and speech.
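
At its core, attention scores every element of a sequence against every other element and mixes information accordingly. A minimal numpy sketch of scaled dot-product attention — just the building block; real transformers add learned projections, multiple heads and more:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: weigh values by query-key similarity."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # how relevant is each token pair?
    return softmax(scores) @ V           # context-weighted mix of values

seq_len, d = 4, 8                        # 4 tokens, 8-dim embeddings
rng = np.random.default_rng(0)
Q = K = V = rng.random((seq_len, d))     # self-attention: same sequence
print(attention(Q, K, V).shape)          # (4, 8): one context vector per token
```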

Today, conversational AI is widespread. It parses sentences users type into search boxes. It reads text messages when you’re driving, and lets you dictate responses.

These large language models are also finding applications in drug discovery, translation, chatbots, software development, call center automation and more.

AI + Graphics Create 3D Worlds

Users in many, often unexpected, areas are feeling the power of AI computing.

The latest video games achieve new levels of realism thanks to real-time ray tracing and NVIDIA DLSS, which uses AI to deliver ultra-smooth gameplay on the GeForce RTX platform.

That’s just the start. The emerging field of neural graphics will speed the creation of virtual worlds to populate the metaverse, the 3D evolution of the internet.

Neural graphics accelerate design and development of virtual worlds to populate the metaverse, the 3D internet.

To kickstart that work, NVIDIA released several neural graphics tools in August.

Use Cases for AI Computing

Cars, Factories and Warehouses

Car makers are embracing AI computing to deliver a smoother, safer driving experience and provide smart infotainment capabilities for passengers.

Mercedes-Benz is working with NVIDIA to develop software-defined vehicles. Its upcoming fleets will deliver intelligent and automated driving capabilities powered by an NVIDIA DRIVE Orin centralized computer. The systems will be tested and validated in the data center using DRIVE Sim software, built on NVIDIA Omniverse, to ensure they can safely handle all types of scenarios.

At CES, the automaker announced it will also use Omniverse to design and plan manufacturing and assembly facilities at its sites worldwide.

BMW Group is also among many companies creating AI-enabled digital twins of factories in NVIDIA Omniverse, making plants more efficient. It’s an approach also adopted by consumer giants such as PepsiCo for its logistics centers, as shown in the video below.

Inside factories and warehouses, autonomous robots further enhance efficiency in manufacturing and logistics. Many are powered by the NVIDIA Jetson edge AI platform and trained with AI in simulations and digital twins using NVIDIA Isaac Sim.

In 2022, even tractors and lawn mowers became autonomous with AI.

In December, Monarch Tractor, a startup based in Livermore, Calif., released an AI-powered electric vehicle to bring automation to agriculture. In May, Scythe, based in Boulder, Colo., debuted its M.52 (below), an autonomous electric lawn mower packing eight cameras and more than a dozen sensors.

Securing Networks, Sequencing Genes

The number and variety of use cases for AI computing are staggering.

Cybersecurity software detects phishing and other network threats faster with AI-based techniques like digital fingerprinting.

In healthcare, researchers broke a record in January 2022 sequencing a whole genome in well under eight hours thanks to AI computing. Their work (described in the video below) could lead to cures for rare genetic diseases.

AI computing is at work in banks, retail shops and post offices. It’s used in telecom, transport and energy networks, too.

For example, the video below shows how Siemens Gamesa is using AI models to simulate wind farms and boost energy production.

As today’s AI computing techniques find new applications, researchers are inventing newer and more powerful methods.

Another powerful class of neural networks, diffusion models, became popular in 2022 because they could turn text descriptions into fascinating images. Researchers expect these models will be applied to many uses, further expanding the horizon for AI computing.

AI’s Leg Up: Startup Accelerates Robotics Simulation for $8 Trillion Food Market

Robots are finally getting a grip.

Developers have been striving to close the gap on robotic gripping for the past several years, pursuing applications for multibillion-dollar industries. Securely gripping and transferring fast-moving items on conveyor belts holds vast promise for businesses.

Soft Robotics, a Bedford, Mass., startup, is harnessing NVIDIA Isaac Sim to help close the sim-to-real gap for a handful of robotic gripping applications. One focus is perfecting grips for the pick-and-place handling of foods for packaging.

Food packaging and processing companies are using the startup’s mGripAI system, which combines soft grasping with 3D vision and AI to grasp delicate foods such as proteins, produce and bakery items without damage.

“We’re selling the hands, the eyes and the brains of the picking solution,” said David Weatherwax, senior director of software engineering at Soft Robotics.

Unlike other industries that have adopted robotics, the $8 trillion food market has been slow to develop robots to handle variable items in unstructured environments, says Soft Robotics.

The company, founded in 2013, recently landed $26 million in Series C funding from Tyson Ventures, Marel and Johnsonville Ventures.

Companies such as Tyson Foods and Johnsonville are betting on adoption of robotic automation to help improve safety and increase production in their facilities. Both companies rely on Soft Robotics technologies.

Soft Robotics is a member of the NVIDIA Inception program, which provides companies with GPU support and guidance on AI platforms.

Getting a Grip With Synthetic Data

Soft Robotics develops unique models for every one of its gripping applications, each requiring specific datasets. And picking from piles of wet, slippery chicken and other foods can be a tricky challenge.

Using Omniverse and Isaac Sim, the company can create 3D renderings of chicken parts against different backgrounds — on conveyor belts or in bins — and under different lighting scenarios.

The company taps Isaac Replicator to develop synthetic data, generating hundreds of thousands of images per model and distributing that work across an array of cloud instances. Isaac Replicator is a set of tools, APIs and workflows for generating synthetic data using Isaac Sim.
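
Replicator scripts generally follow one shape: build or load a scene, randomize it per frame, and attach a writer that dumps annotated images. The sketch below, meant to run inside Isaac Sim’s Python environment, is a rough illustration of that pattern — the asset path, pose ranges and writer settings are assumptions, not Soft Robotics’ setup.

```python
import omni.replicator.core as rep

with rep.new_layer():
    # Hypothetical camera and asset; the USD path is a placeholder.
    camera = rep.create.camera(position=(0, 0, 200))
    render_product = rep.create.render_product(camera, (1024, 1024))
    items = rep.create.from_usd("omniverse://localhost/items/chicken_part.usd")

    # Randomize pose each frame so the pile forms differently every time.
    with rep.trigger.on_frame(num_frames=1000):
        with items:
            rep.modify.pose(
                position=rep.distribution.uniform((-50, -50, 0), (50, 50, 20)),
                rotation=rep.distribution.uniform((0, 0, 0), (360, 360, 360)),
            )

    # Write RGB frames plus 2D bounding boxes for training.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_out", rgb=True, bounding_box_2d_tight=True)
    writer.attach([render_product])
```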

It also runs pose estimation models to help its gripping system see the angle of the item to pick.

NVIDIA A100 Tensor Core GPUs on site enable Soft Robotics to run split-second inference with the unique models for each application in these food-processing facilities. Meanwhile, simulation and training in Isaac Sim offer access to NVIDIA A100 GPUs for scaling up workloads.

“Our current setup is fully synthetic, which allows us to rapidly deploy new applications,” said Weatherwax. “We’re all in on Omniverse and Isaac Sim, and that’s been working great for us.”

Solving Issues With Occlusion, Lighting 

A big challenge for Soft Robotics is solving occlusion — understanding how different pieces of chicken stack up and overlap one another when dumped into a pile. “How those form can be pretty complex,” he said.

Glares on wet chicken can potentially throw off detection models. “A key thing for us is the lighting, so the NVIDIA RTX-driven ray tracing is really important,” he added.

But where it really gets interesting is modeling it all in 3D and figuring out in a split second which item is the least obstructed in a pile and most accessible for a robot gripper to pick and place.

Because Omniverse builds synthetic datasets with physics-based accuracy, it enables Soft Robotics to create such environments. “One of the big challenges we have is how all these amorphous objects form into a pile.”

Boosting Production Line Pick Accuracy

Production lines in food processing plants can move fast. But robots deployed with application-specific models promise to handle as many as 100 picks per minute.

Such tasks are still a work in progress; success hinges on accurate representations of piles of items, supported by training datasets that consider every possible way items can fall into a pile.

The objective is to provide the robot with the best available pick from a complex and dynamic environment. If food items fall off the conveyor belt or otherwise become damaged, then it is considered waste, which directly impacts yield.

Driving Production Gains 

Meat-packing companies rely on lines of people to process chicken but, like so many other industries, they have faced employee shortages. Some that are building new plants for food processing can’t even attract enough workers at launch, said Weatherwax.

“They are having a lot of staffing challenges, so there’s a push to automate,” he said.

The Omniverse-driven work for food-processing companies has delivered a more than 10x increase in the startup’s simulation capacity, accelerating deployment times for AI picking systems from months to days.

And that’s enabling Soft Robotics customers to get a grip on more than just deploying automated chicken-picking lines — it’s ensuring that they’re covered for an employment challenge that has hit many industries, especially those with increased injury and health risks.

“Handling raw chicken is a job better suited for a robot,” he said.

Download Isaac Sim to use the Replicator features.

The Ultimate Upgrade: GeForce RTX 4080 SuperPOD Rollout Begins Today

The Ultimate upgrade begins today: GeForce NOW RTX 4080 SuperPODs are now rolling out, bringing a new level of high-performance gaming to the cloud.

Ultimate members will start to see RTX 4080 performance in their region soon, and experience titles like Warhammer 40,000: Darktide, Cyberpunk 2077, The Witcher 3: Wild Hunt and more at ultimate quality. New features are also available now for Ultimate members streaming from RTX 3080 servers, and members will be able to check GFN Thursday each week for availability updates in their regions.

Plus, get ready for 10 more supported games in the GeForce NOW library.

This Cloud Has an ‘Ultimate’ Lining

The GeForce NOW Ultimate membership brings new features and NVIDIA RTX technologies to the cloud for the first time, made possible by the NVIDIA Ada Lovelace GPU architecture.

Ultimate members receive three major streaming upgrades. The new RTX 4080 SuperPODs are capable of rendering and streaming at up to 240 frames per second. When paired with NVIDIA Reflex, it makes every moment of the action feel as if it’s on a desktop PC. And 4K gaming goes beyond fast with an upgrade to 120 fps, with support for DLSS 3 and RTX ON. Plus, for the first time, ultrawide resolutions are supported, giving members a wider point of view, at up to 3,840 x 1,600 resolution and 120 fps.

Coming to a zone near you.

Ultimate members in and around San Jose, Los Angeles, Dallas and Frankfurt, Germany, will be the first to experience the power of these RTX 4080 SuperPODs, starting today. Each week, GFN Thursday will spotlight the newest cities with upgraded servers, so make sure to check back each week to see which cities light up next on the map.

Even better: Starting today, Ultimate members streaming on RTX 3080 servers can take advantage of ultrawide resolutions and high dynamic range on the GeForce NOW PC and macOS apps. Learn more about supported resolutions and frame rates. Make sure you have app version 2.0.47.125 or later, and restart the app to see the new Ultimate features.

Don’t let this cloud pass you by — check it out and sign up. Get the Ultimate upgrade without paying the ultimate price — this highest-performance membership tier is only $19.99 per month or $99.99 for six months.

Game Related

The new year in Teyvat approaches in ‘The Exquisite Night Chimes’ update.

Celebrate the new year in Genshin Impact version 3.4, available this week. Players can explore Sumeru’s new sandstorm-ravaged desert with their favorite characters — and GeForce NOW members can play on the go with mobile touch controls.

Plus, 10 new games will be supported in the cloud this week:

Farlanders (New Release on Steam)
Surviving the Abyss (New Release on Steam)
Tortuga – A Pirate’s Tale (New Release on Epic Games, Jan. 19)
Epistory – Typing Chronicles (New Release on Epic Games, Jan. 19)
Absolute Drift (Steam)
BLACKTAIL (Steam and Epic Games)
Dwarf Fortress (Steam)
Hello Neighbor 2 (Steam and Epic Games)
NEBULOUS: Fleet Command (Steam)
Shadow Tactics – Aiko’s Choice (Epic Games Store)

Make this the ultimate weekend by playing these titles or any of the other 1,500 games in the GeForce NOW library. What game will you stream first on your Ultimate membership? Let us know in the comments or on Twitter.

Are you ready for the Ultimate cloud gaming experience?

— NVIDIA GeForce NOW (@NVIDIAGFN) January 18, 2023

Sequoia Capital’s Pat Grady and Sonya Huang on Generative AI

For insights into the future of generative AI, check out the latest episode of the NVIDIA AI Podcast. Host Noah Kravitz is joined by Pat Grady and Sonya Huang, partners at Sequoia Capital, to discuss their recent essay, “Generative AI: A Creative New World.”

The authors delve into the potential of generative AI to enable new forms of creativity and expression, as well as the challenges and ethical considerations of this technology.

Grady and Huang emphasize the potential of generative AI to revolutionize industries such as art, design and media by allowing for the creation of unique, personalized content on a scale that would be impossible for humans to achieve alone.

They also address the importance of considering the ethical implications of the technology, including the potential for biased or harmful outputs and the need for responsible use and regulation.

Listen to the full episode to hear more about the possibilities of generative AI and the considerations to be made as this technology moves forward.

The AI Podcast · Sequoia Capital’s Pat Grady and Sonya Huang on Generative AI – Ep. 187

You Might Also Like

Art(ificial) Intelligence: Pindar Van Arman Builds Robots That Paint

Pindar Van Arman, an American artist and roboticist, designs painting robots that explore the differences between human and computational creativity. Since his first system in 2005, he has built multiple artificially creative robots. The most famous, Cloud Painter, was awarded first place at Robotart 2018.

Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art

Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He’s also half of the husband-wife team that used convolutional neural networks to authenticate artistic masterpieces, including da Vinci’s Salvator Mundi, with AI’s help.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments

Humans playing games against machines is nothing new, but now computers can develop games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, a generative neural network that produces a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

Also get the AI Podcast through Apple Music, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Artist Zhelong Xu Brings Chinese Zodiac to Life for Lunar New Year This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

To celebrate the upcoming Lunar New Year holiday, NVIDIA artist Zhelong Xu, aka Uncle Light, brought Chinese zodiac signs to life this week In the NVIDIA Studio — modernizing the ancient mythology in his signature style.

Chinese Tradition Brought to the 21st Century

NVIDIA principal artist Zhelong Xu is also co-founder of the Shanghai Magicstone Images studio and an art consultant for the Tencent TiMi Studio Group.

Xu is deeply passionate about modeling Chinese zodiac signs in 3D. His first serious attempt, Carefree Sheep, was chosen by Allegorithmic — the maker of Substance Painter, since acquired by Adobe — as the artwork for the software’s first launch screen.

‘Carefree Sheep’ by 3D artist Zhelong Xu.

Xu creates at least one piece for his zodiac series each year. Harboring a Cute Tiger is his most popular work, which reached over 16 million people on the Chinese social media app Weibo.

‘Harboring a Cute Tiger’ by Zhelong Xu.

“I had the idea to turn this series into ceramic works, so I continued working with my friends in Jingdezhen to turn this series into physical objects,” he said.

Zodiac piece for the Year of the Rabbit.

“I wanted to do something different in the Year of the Rabbit, so I chose to color the rabbit in NVIDIA green to match the classical Chinese atmosphere and to bring out the Chinese New Year energy,” said Xu, who joined NVIDIA last year.

The two emerald rabbits, one with its ears up and the other with them down, are designed to look like they’re brimming with anticipation for the arrival of Lunar New Year.

Xu deployed ZBrush for initial modeling with its custom sculpting tools. He then UV mapped the 3D model in preparation for applying a special emerald texture made in Adobe Substance 3D Painter. NVIDIA RTX-accelerated light- and ambient-occlusion features baked and optimized the scene assets in mere seconds, letting Xu experiment with textures quickly and easily with his GeForce RTX 4090 GPU.

Lighting adjustments in Blender.

The artist quickly exported files to Blender to set up the environment and tinker with lighting. He added many Eastern-style architectural and furniture assets from the PBRMAX.com asset library.

High-quality 3D assets gathered from the PBRMAX.com asset library.

Movement within the viewport was seamless with Blender Cycles RTX-accelerated OptiX ray tracing for interactive, photorealistic modeling.

Xu then deployed his secret weapon: NVIDIA Omniverse, a platform for creating and operating metaverse applications. He saved files in Universal Scene Description (USD) format using the Omniverse export plug-in to import them into the NVIDIA Omniverse Create app for final modeling. Here, Xu made adjustments to the translucent emerald material to make it as realistic as possible.
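
USD is an open, scriptable format, which is part of why assets travel so smoothly between tools like Blender and Omniverse Create. As a generic illustration (not Xu’s actual export), writing a tiny stage with Pixar’s pxr Python bindings looks like this:

```python
from pxr import Usd, UsdGeom

# Create a new stage and a simple transform/prim hierarchy.
stage = Usd.Stage.CreateNew("rabbit_scene.usda")   # hypothetical file name
root = UsdGeom.Xform.Define(stage, "/Root")
sphere = UsdGeom.Sphere.Define(stage, "/Root/Placeholder")
sphere.GetRadiusAttr().Set(2.0)

stage.SetDefaultPrim(root.GetPrim())
stage.Save()   # the .usda file can now be opened in Omniverse Create
```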

USD format enables import into Omniverse Create.

Omniverse Create was incredibly useful for scene modifications, Xu said, as it enabled him to test lighting while the scene rendered in real time. This gave him an accurate preview of the final render, allowing for more meaningful real-time edits.

“Thanks to the power of the GeForce RTX 4090 GPU and RTX optimization in Omniverse, I got the desired effect very quickly and tested a variety of lighting effects,” he said.

Final environmental edits in Omniverse Create.

Omniverse gives 3D artists their choice of renderer within the viewport, with support for Pixar HD Storm, Chaos V-Ray, Maxon’s Redshift, OTOY Octane, Blender Cycles and more. Xu deployed the unbiased NVIDIA Iray renderer to complete the project.

3D artist Zhelong Xu.

View more of Xu’s work on ArtStation.

#NewYearNewArt Challenge 

With a new year comes new art, and we’d love to see yours! Use the hashtag #NewYearNewArt and tag @NVIDIAStudio to show off recent creations for a chance to be featured on our channels.

The challenge is off to a great start:

A new year means new art!

Join our Jan-Feb #NewYearNewArt challenge by sharing any new or relatively new art you’ve created for a chance to be featured on our channels!

Be sure to tag #NewYearNewArt and thanks to @AOi__Pan for sharing their new art. pic.twitter.com/lXiFLROhQh

— NVIDIA Studio (@NVIDIAStudio) January 10, 2023

Excellent artists like @rabbit.hole_renders have helped kick off the challenge with creativity that’s taking people to new worlds.  

Who’s playing this game?

Awesome submission to the #NewYearNewArt Challenge by rabbit.hole_renders (IG).

Share any new or relatively new art you’ve created with the tag #NewYearNewArt for a chance to be featured on our channels! pic.twitter.com/GyrdJPbcM5

— NVIDIA Studio (@NVIDIAStudio) January 14, 2023

Plus, get a dose of potassium with @graffitpl’s banana-based animation that comes with a side of mushrooms.

Were you looking for m̷o̷r̷e̷ spore banana animations? Look no further!

This dose of potassium comes from the brilliant @graffitpl (IG).

Tag #NewYearNewArt in your recently created art for a chance to be featured until the end of February! pic.twitter.com/78ETFDtVH4

— NVIDIA Studio (@NVIDIAStudio) January 16, 2023

Keep your eyes peeled for more amazing submissions on the NVIDIA Studio Instagram stories.

Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.