Deep Learning Digs Deep: AI Unveils New Large-Scale Images in Peruvian Desert

Researchers at Yamagata University in Japan have harnessed AI to uncover four previously unseen geoglyphs — images on the ground, some as wide as 1,200 feet, made using the land’s elements — in Nazca, a seven-hour drive south of Lima, Peru.

The geoglyphs — a humanoid, a pair of legs, a fish and a bird — were revealed using a deep learning model, making the discovery process significantly faster than traditional archaeological methods.

The team’s deep learning model training was executed on an IBM Power Systems server with an NVIDIA GPU.

Using open-source deep learning software, the researchers analyzed high-resolution aerial photographs as part of a study that began in November 2019.
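The matching step lends itself to a simple pattern: slide a classifier across tiles of the survey imagery and flag high-confidence hits for field verification. Here's a minimal, hypothetical PyTorch sketch of that pattern; the tile size, model file and threshold are assumptions, not details of the Yamagata pipeline.

```python
# Hypothetical sketch of tile-based candidate detection on aerial imagery.
# Tile size, model file and threshold are illustrative assumptions,
# not details of the Yamagata University pipeline.
import torch
import torchvision.transforms as T
from PIL import Image

TILE = 512          # pixels per square tile of the survey photo
THRESHOLD = 0.9     # minimum confidence to flag a tile for field survey

model = torch.load("geoglyph_classifier.pt")  # assumed single-logit binary model
model.eval()
preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

image = Image.open("aerial_survey.tif").convert("RGB")
width, height = image.size

candidates = []
for x in range(0, width - TILE + 1, TILE):
    for y in range(0, height - TILE + 1, TILE):
        tile = image.crop((x, y, x + TILE, y + TILE))
        with torch.no_grad():
            score = torch.sigmoid(model(preprocess(tile).unsqueeze(0)))[0, 0]
        if score > THRESHOLD:
            candidates.append((x, y, float(score)))

print(f"{len(candidates)} candidate tiles to verify onsite")
```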

Published this month in the Journal of Archaeological Science, the study confirms the deep learning model’s findings through onsite surveys and highlights the potential of AI in accelerating archaeological discoveries.

The deep learning techniques that are the hallmark of modern AI support a variety of archaeological efforts, whether analyzing ancient scrolls discovered across the Mediterranean or categorizing pottery sherds from the American Southwest.

The Nazca Lines, a series of ancient geoglyphs dating from 500 B.C. to 500 A.D. — most likely created primarily between 100 B.C. and 300 A.D. — were made by removing darker stones on the desert floor to reveal the lighter-colored sand beneath.

The drawings — depicting animals, plants, geometric shapes and more — are thought to have had religious or astronomical significance to the Nazca people who created them.

The discovery of these new geoglyphs indicates the possibility of more undiscovered sites in the area.

And it underscores how technology like deep learning can enhance archaeological exploration, providing a more efficient approach to uncovering hidden archaeological sites.

Read the full paper.

Featured image courtesy of Wikimedia Commons.

Scientists Improve Delirium Detection Using AI and Rapid-Response EEGs

Detecting delirium isn’t easy, but it can have a big payoff: speeding essential care to patients, leading to quicker and surer recovery.

Improved detection also reduces the need for long-term skilled care, enhancing the quality of life for patients while decreasing a major financial burden. In the U.S., caring for those suffering from delirium costs up to $64,000 a year per patient, according to the National Institutes of Health.

In a paper published last month in the Nature portfolio journal Scientific Reports, researchers describe how they used a deep learning model called Vision Transformer, accelerated by NVIDIA GPUs, alongside a rapid-response electroencephalogram, or EEG, device to detect delirium in critically ill older adults.

The paper, titled “Supervised deep learning with vision transformer predicts delirium using limited lead EEG,” is authored by Malissa Mulkey of the University of South Carolina, Huyunting Huang of Purdue University, Thomas Albanese and Sunghan Kim of East Carolina University, and Baijian Yang of Purdue.

Their approach achieved a testing accuracy rate of 97%, promising a potential breakthrough in forecasting delirium. And by harnessing AI and EEGs, the researchers could objectively evaluate prevention and treatment methods, leading to better care.

This impressive result is due in part to the accelerated performance of NVIDIA GPUs, enabling the researchers to accomplish their tasks in half the time compared to CPUs.

Delirium affects up to 80% of critically ill patients. Yet conventional clinical detection methods identify fewer than 40% of cases — representing a significant gap in patient care. Presently, screening ICU patients involves a subjective bedside assessment.

The introduction of handheld EEG devices could make screening more accurate and affordable, but the lack of skilled technicians and neurologists poses a challenge.

The use of AI, however, can eliminate the need for a neurologist to interpret findings and allow for the detection of changes associated with delirium roughly two days before symptom onset, when patients are more receptive to treatment. It also makes it possible to use EEGs with minimal training.

The researchers applied ViT, a vision adaptation of the transformer architecture originally developed for natural language processing, to EEG data on NVIDIA GPUs, offering a fresh approach to data interpretation.
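The general recipe, rendering EEG traces as spectrogram "images" and classifying them with a ViT, can be sketched as follows. This is a hedged illustration, not the paper's exact configuration; channel count, window length and model variant are assumptions.

```python
# A hedged illustration of the general recipe: EEG rendered as spectrogram
# "images," then classified with a ViT. Channel count, window length and
# model variant are assumptions, not the paper's exact configuration.
import numpy as np
import timm
import torch
import torch.nn.functional as F
from scipy.signal import spectrogram

def eeg_to_image(eeg: np.ndarray, fs: int = 256) -> torch.Tensor:
    """Stack per-channel log spectrograms into a (1, C, 224, 224) tensor."""
    channels = []
    for ch in eeg:  # eeg shape: (n_channels, n_samples)
        _, _, sxx = spectrogram(ch, fs=fs, nperseg=128)
        sxx = torch.tensor(np.log1p(sxx), dtype=torch.float32)[None, None]
        channels.append(F.interpolate(sxx, size=(224, 224)))
    return torch.cat(channels, dim=1)

n_channels = 4  # limited-lead, rapid-response device (assumed)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          in_chans=n_channels, num_classes=2).to(device).eval()

eeg = np.random.randn(n_channels, 256 * 60)  # stand-in for one minute of EEG
with torch.no_grad():
    logits = model(eeg_to_image(eeg).to(device))
print(torch.softmax(logits, dim=-1))  # P(no delirium), P(delirium)
```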

The use of a handheld rapid-response EEG device, which doesn’t require large EEG machines or specialized technicians, was another noteworthy aspect of the study.

This practical tool, combined with advanced AI models for interpreting the data they collect, could streamline delirium screenings in critical care units.

The research presents a promising method for delirium detection that could shorten hospital stays, increase discharge rates, decrease mortality rates and reduce the financial burden associated with delirium.

By integrating the power of NVIDIA GPUs with innovative deep learning models and practical medical devices, this study underlines the transformative potential of technology in enhancing patient care.

As AI grows and develops, medical professionals are increasingly likely to rely on it to forecast conditions like delirium and intervene early, revolutionizing the future of critical care.

Read the full paper.

A Golden Age: ‘Age of Empires III’ Joins GeForce NOW

Conquer the lands in Microsoft’s award-winning Age of Empires III: Definitive Edition. It leads 10 new games supported today on GeForce NOW.

At Your Command

Stream battles all from the cloud.

Age of Empires III: Definitive Edition is a remaster of one of the most beloved real-time strategy franchises featuring improved visuals, enhanced gameplay, cross-platform multiplayer and more. Command mighty civilizations from across Europe and the Americas or jump to the battlefields of Asia. Members can experience two new game modes: Historical Battles and The Art of War Challenge Missions. Two new nations also join this edition — Sweden and the Inca — each with advantages for conquering the New World.

Build an empire today and stream across devices in glorious 4K resolution with an Ultimate membership.

Conquer Your Games List

Master the art of siege tactics in “Conqueror’s Blade” this week.

The GeForce NOW library is always expanding. Take a look at the 10 newly supported games this week.

Aliens: Dark Descent (New release on Steam, June 20)
Trepang2 (New release on Steam, June 21)
Forever Skies (New release on Steam, June 22)
Age of Empires III: Definitive Edition (Steam)
A.V.A Global (Steam)
Bloons TD 6 (Steam)
Conqueror’s Blade (Steam)
Layers of Fear (Steam)
Park Beyond (Steam)
Tom Clancy’s Rainbow Six Extraction (Steam)

Before diving into the weekend, let us know your answer to our question of the week on Twitter or in the comments below. Happy streaming!

You’ve been chosen to build the greatest empire in history.

What time period are you choosing to build it in?

— NVIDIA GeForce NOW (@NVIDIAGFN) June 14, 2023

Shell-e-brate Good Times in 3D With ‘Kingsletter’ This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Amir Anbarestani, an accomplished 3D artist who goes by the moniker Kingsletter, had a “shell of a good time” creating his Space Turtle scene this week In the NVIDIA Studio.

Kingsletter has always harbored a fascination with 3D art, he said. As a child, he often enjoyed exploring and crafting within immersive environments. Whether it was playing with plasticine — putty-like modeling material — or creating pencil drawings, his innate inclination for self-expression always found resonance within the expansive domain of 3D.

Space Turtle with MSI creator Z17HX courtesy of @AustraliaMSI & @NVIDIAStudio

Unleash your creativity with NVIDIA Studio drivers!

Get yours at: https://t.co/idJlWgb8UX pic.twitter.com/Ff6Y6RfQp4

— King’s Letter (@TheKingsletter) April 28, 2023

Below, he shares his inspiration and creative process using ZBrush, Adobe Substance 3D Painter and Blender.

An NVIDIA DLSS 3 plug-in is now available in Unreal Engine 5, offering select benefits including AI upscaling for high frame rates, super resolution and more for GeForce RTX 40 Series owners.

And 3D creative app Marvelous Designer launched its NVIDIA Omniverse Connector this month, featured in the latest Into the Omniverse. Learn how talented artists are using the Connector, along with the Universal Scene Description (“OpenUSD”) framework, to elevate their creative workflows.

NVIDIA DLSS 3 Plug-In Is Unreal — Engine 5

NVIDIA Studio released a DLSS 3 plug-in compatible with Unreal Engine 5. The Play in Editor tool is useful for game developers to quickly review gameplay in a level while editing — and DLSS 3 AI upscaling will unlock significantly higher frame rates on GeForce RTX 40 Series GPUs for even smoother previewing.

NVIDIA DLSS 3 plug-in unlocks incredible visual details with DLSS 3 in Unreal Engine 5.

Plus, select Unreal Engine viewports offer DLSS 2 Super Resolution and upscaling benefits in typical content-creation workflows like modeling, lighting, animation and more.

Download DLSS 3 for Unreal Engine 5.2, available now. Learn more about NVIDIA technologies supported by Unreal Engine 5.

Turtle Recall 

The process began with sketching and initial sculpting in the ZBrush tool, where the concept of a floating turtle in space took shape and evolved into a dynamic shot of the creature soaring toward the camera.

“It’s remarkable how something as simple as shaping an idea’s basic form can be so immensely gratifying,” said Kingsletter of the blockout phase. “There’s a unique joy in starting with a blank canvas and gradually bringing the essence of a concept to life.”

Sketching and initial sculpting in ZBrush.

After finalizing the model in ZBrush, Kingsletter used ZRemesher to retopologize it, or generate a low-poly version suitable for the intended scene. This is useful for removing artifacts and other mesh issues before animation and rigging.

“NVIDIA graphics cards are industry leading in the creative community. I don’t think I know anyone that uses other GPUs.” — Kingsletter

RizomUV, a dedicated UV-mapping application, was then used to unwrap the model, flattening its mesh into 2D coordinates. This makes it possible to apply textures to the 3D object with precision, a common need for professional artists.

Next, Kingsletter applied surface details, from subtle dusting to extreme wear and tear, with materials mimicking real-world behaviors such as sheen, subsurface scattering and more in Adobe Substance 3D Painter. RTX-accelerated light and ambient occlusion baking produced fully baked models in mere seconds.

Textures added and baked rapidly in Adobe Substance 3D Painter.

Kingsletter then moved to Blender to animate the scene, setting up simple rigs and curves to bring the turtle’s flapping limbs and flight to life. Harnessing the potential of his MSI Creator Z17 HX Studio A13V NVIDIA Studio laptop with GeForce RTX 4070 graphics turtle-ly exceeded the artist’s lofty expectations.

The MSI Creator Z17 HX Studio laptop with GeForce RTX 4070 graphics.

“As a digital creative professional, I always strive to work with the best creative tools available,” Kingsletter said. “Choosing the MSI Creator laptop allowed me to exceed my creative professional needs and indulge in my passionate gaming hobby.”

He enriched the cosmic environment using Blender’s particle system, which scattered random debris, asteroids and a small, rotating planet throughout the outer-space scene. AI-powered, RTX-accelerated OptiX ray tracing unlocked buttery-smooth interactive animations in the viewport.
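For the curious, the same scattering setup can be driven through Blender's Python API. A hypothetical bpy sketch, with object names assumed rather than taken from the artist's file:

```python
# Hypothetical bpy sketch of debris scattering with a particle system.
# Object names are placeholders, not taken from the artist's .blend file.
import bpy

emitter = bpy.data.objects["SpaceVolume"]    # assumed emitter mesh
asteroid = bpy.data.objects["AsteroidRock"]  # assumed debris mesh to instance

mod = emitter.modifiers.new(name="Debris", type='PARTICLE_SYSTEM')
settings = mod.particle_system.settings
settings.count = 500                 # number of scattered instances
settings.physics_type = 'NO'         # static scatter, no simulation
settings.render_type = 'OBJECT'      # render each particle as a mesh
settings.instance_object = asteroid
settings.size_random = 0.8           # vary asteroid scale for natural variety
```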

Create magnificent worlds in Blender accelerated by GeForce RTX graphics.

“Simulating smoke proved to be the most challenging aspect,” said Kingsletter about his first foray into this form of animation. “Through numerous trials and errors, I persevered until I achieved a truly satisfactory result.”

Realistic smoke elevated the 3D animation.

His RTX 4070 GPU facilitated smoother, more efficient rendering of the final visuals with RTX-accelerated OptiX ray tracing in Blender Cycles, ensuring the fastest final frame render.
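Enabling that OptiX path in Cycles takes only a few lines of configuration. Here's a minimal bpy sketch using standard Blender settings:

```python
# A minimal bpy sketch of pointing Cycles at the OptiX backend: standard
# Blender settings, though available devices depend on installed hardware.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = 'OPTIX'  # use the RTX GPU's RT Cores
prefs.get_devices()                  # refresh the detected device list
for dev in prefs.devices:
    dev.use = (dev.type == 'OPTIX')

bpy.context.scene.cycles.device = 'GPU'
```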

When asked what he’d advise his younger artist self, Kingsletter said, “I’d enhance my observation skills. By immersing myself in the intricacies of form and paying careful attention to the world around me, I would have laid a stronger foundation for my creative journey.”

Wise words for all creators.

Digital 3D artist Kingsletter.

Check out Kingsletter’s beautiful 3D creations on Instagram.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Into the Omniverse: Universal Scene Description Support for Marvelous Designer Lets Users Tailor Digital Assets, Clothes for 3D Characters

Editor’s note: This post is part of Into the Omniverse, a monthly series focused on how artists, developers and enterprises can transform their workflows using the latest advances in Universal Scene Description and NVIDIA Omniverse.

Whether animating fish fins or fashioning chic outfits for digital characters, creators can tap Marvelous Designer software to compose and tailor assets, clothes and other materials for their 3D workflows.

Marvelous Designer recently launched an Omniverse Connector, a tool that enhances collaborative workflows that take place between its software and NVIDIA Omniverse, a development platform for connecting and building 3D tools and applications.

The Connector enables users to significantly speed and ease their design processes, thanks to its support for the Universal Scene Description framework, known as OpenUSD, which serves as a common language between 3D tools.

In a typical computer graphics pipeline, an artist needs to go back and forth between software in finalizing their work. The new Omniverse Connector enables creators to save time with Marvelous Designer’s improved import and export capabilities through OpenUSD.
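The interchange itself is straightforward at the USD level: assets exported from one tool are referenced, not copied, into a stage other tools can open. A minimal sketch with the pxr Python API, using illustrative file names and prim paths:

```python
# A minimal OpenUSD sketch of the interchange idea: a garment exported from
# one tool is referenced, not copied, into a shared stage that any USD-aware
# application can open. File names and prim paths are illustrative.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("shared_scene.usda")
UsdGeom.Xform.Define(stage, "/World")

# Reference the exported garment; edits saved from the source tool flow
# through to every application reading this stage.
garment = stage.DefinePrim("/World/Jacket")
garment.GetReferences().AddReference("jacket_from_marvelous_designer.usd")

stage.GetRootLayer().Save()
```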

In a recent livestream, 3D designer Brandon Yu shared how he’s using the new Connector and OpenUSD to improve his collaborative workflow, enhance productivity, expand creative possibilities and streamline his design process.

Mike Shawbrook, who has more than 150,000 subscribers on his MH Tutorials YouTube channel, walks through using the new Connector in the tutorial below. Shawbrook demonstrates how he set up a live session between Marvelous Designer and Omniverse to create a simple cloth blanket.

For more, check out this tutorial on using the new Connector and see how OpenUSD can improve 3D workflows:

Improved USD Compatibility

With the Marvelous Designer Omniverse Connector, users can harness the real-time rendering capabilities of Omniverse to visualize their garments in an interactive environment. This integration empowers creators to make informed design decisions, preview garments’ reactions to different lighting conditions and simulate realistic fabric behavior in real time.

The Connector’s expanded support for OpenUSD enables seamless interchange of 3D data between creative applications.

In the graphic above, an artist uses the new connector to adjust 3D-animated fish fins, a key digital material in an underwater scene.

Get Plugged Into the Omniverse 

To learn more about how OpenUSD can improve 3D workflows, check out a new video series on the file framework. The first installment covers four OpenUSD “superpowers.”

Anyone can build their own Omniverse extension or Connector to enhance their 3D workflows and tools.

Share your Marvelous Designer and Omniverse creations to the Omniverse gallery for a chance to be featured on NVIDIA social media channels.

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team. Developers can get started with Omniverse resources and learn about OpenUSD. Explore the growing ecosystem of 3D tools connected to Omniverse.

Stay up to date on the platform by subscribing to the newsletter, and follow NVIDIA Omniverse on Instagram, Medium and Twitter. For more, join the Omniverse community and check out the Omniverse forums, Discord server, Twitch and YouTube channels. 

Featured image courtesy of Marvelous Designer.

NVIDIA CEO: Creators Will Be “Supercharged” by Generative AI

Generative AI will “supercharge” creators across industries and content types, NVIDIA founder and CEO Jensen Huang said today at the Cannes Lions Festival, on the French Riviera.

“For the very first time, the creative process can be amplified in content generation, and the content generation could be in any modality — it could be text, images, 3D, videos,” Huang said in a conversation with Mark Read, CEO of WPP — the world’s largest marketing and communications services company.

Huang and Read backstage at Cannes Lions

At the event attended by thousands of creators, marketers and brand execs from around the world, Huang outlined the impact of AI on the $700 billion digital advertising industry. He also touched on the ways AI can enhance creators’ abilities, as well as the importance of responsible AI development.

“You can do content generation at scale, but infinite content doesn’t imply infinite creativity,” he said. “Through our thoughts, we have to direct this AI to generate content that has to be aligned to your values and your brand tone.”

The discussion followed Huang’s recent keynote at COMPUTEX, where NVIDIA and WPP announced a collaboration to develop a content engine powered by generative AI and the NVIDIA Omniverse platform for building and operating metaverse applications.

Driving Forces of the Generative AI Era

NVIDIA has been pushing the boundaries of graphics technology for 30 years and has been at the forefront of the AI revolution for a decade. This combination of expertise in graphics and AI uniquely positions the company to enable the new era of generative AI applications.

Huang said that “the biggest moment of modern AI” can be traced back to an academic contest in 2012, when a team of University of Toronto researchers led by Alex Krizhevsky showed that NVIDIA GPUs could train an AI model that recognized objects better than any computer vision algorithm that came before it.

Since then, developers have taught neural networks to recognize images, videos, speech, protein structures, physics and more.

“You could learn the language of almost anything,” Huang said. “Once you learn the language, you can apply the language — and the application of language is generation.”

Generative AI models can create text, pixels, 3D objects and realistic motion, giving professionals superpowers to more quickly bring their ideas to life. Like a creative director working with a team of artists, users can direct AI models with prompts, and fine-tune the output to align with their vision.

“You have to give the machine feedback like the best creative director,” Read said.

These tools aren’t a replacement for human creativity, Huang emphasized. They augment the skills of artists and marketing professionals to help them feed demand from clients by producing content more quickly and in multiple forms tailored to different audiences.

“We will democratize content generation,” Huang said.

Reimagining How We Live, Work and Create With AI

Generative AI’s key benefit for the creative industry is its ability to scale up content generation, rapidly generating options for text and visuals that can be used in advertising, marketing and film.

“In the old days, you’d create hundreds of different ad options that are retrieved based on the medium,” Huang said. “In the future, you won’t retrieve — you’ll generate billions of different ads. But every single one of them has to be tone appropriate, has to be brand perfect.”

For use by professional creators, these AI tools must also produce high-quality visuals that meet or exceed the standard of content captured through traditional methods.

It all starts with a digital twin, a true-to-reality simulation of a real-world physical asset. The NVIDIA Omniverse platform enables the creation of stunning, photorealistic visuals that accurately represent physics and materials — whether for images, videos, 3D objects or immersive virtual worlds.

“Omniverse is a virtual world,” Huang said. “We created a virtual world where AI could learn how to create an AI that’s physically based and grounded by physics.”  

“This virtual world has the ability to ingest assets and content that’s created by any tool, because we have this interface called USD,” he said, referring to the Universal Scene Description framework for collaborating in 3D. With it, artists and designers can combine assets developed using popular tools from companies like Adobe and Autodesk with virtual worlds developed using generative AI.

NVIDIA Picasso, a foundry for custom generative AI models for visual design unveiled earlier this year, also supports best-in-class image, video and 3D generative AI capabilities developed in collaboration with partners including Adobe, Getty Images and Shutterstock.

“We created a platform that makes it possible for our partners to train from data that was licensed properly from, for example, Getty, Shutterstock, Adobe,” Huang said. “They’re respectful of the content owners. The training data comes from that source, and whatever economic benefits come from that could accrete back to the creators.”

Like any groundbreaking technology, AI must be developed and deployed thoughtfully, Read and Huang said. Technology to watermark AI-generated assets and to detect whether a digital asset was modified or counterfeited will support these goals.

“We have to put as much energy into the capabilities of AI as we do the safety of AI,” Huang said. “In the world of advertising, safety is brand alignment, brand integrity, appropriate tone and truth.”

Collaborating on Content Engine for Digital Advertising

As a leader in digital advertising, WPP is embracing AI as a tool to boost creativity and personalization, helping creators across the industry craft compelling messages that reach the right consumer.

“From the creative process to the customer, there’s going to have to be ad agencies in the middle that understand the technology,” Huang said. “That entire process in the middle requires humans in the loop. You have to understand the voice of the brand you’re trying to represent.”

Using Omniverse Cloud, WPP’s creative professionals can build physically accurate digital twins of products using a brand’s specific product-design data. This real-world data can be combined with AI-generated objects and digital environments — licensed through partners such as Adobe and Getty Images — to create virtual sets for marketing content.

“WPP is going to unquestionably become an AI company,” Huang said. “You’ll create an AI factory where the input is creativity, thoughts and prompts, and what comes out of it is content.”

Enhanced by responsibly trained, NVIDIA-accelerated generative AI, this content engine will boost creative teams’ speed and efficiency, helping them quickly render brand-accurate advertising content at scale.

“The type of content you’ll be able to help your clients generate will be practically infinite,” Huang said. “From the days of hundreds of examples of content that you create for a particular brand or for a particular campaign, it’s going to eventually become billions of generated content for every individual.”

Learn more about NVIDIA’s collaboration with WPP.

NVIDIA Research Wins Autonomous Driving Challenge, Innovation Award at CVPR

NVIDIA will be showcased next week as the winner of the fiercely contested 3D Occupancy Prediction Challenge for autonomous driving development at the Computer Vision and Pattern Recognition Conference (CVPR), in Vancouver, Canada.

The competition had more than 400 submissions from nearly 150 teams across 10 regions.

3D occupancy prediction is the process of forecasting the status of each voxel in a scene, that is, each data point on a 3D bird’s-eye-view grid. Voxels can be identified as free, occupied or unknown.
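As a data structure, such a grid is simple. Here's a minimal illustration, with arbitrary dimensions and a hand-placed obstacle standing in for model output:

```python
# A minimal illustration of the voxel grid just described: every cell holds
# one of three states. Grid extent and resolution are arbitrary examples.
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, 2

# 200 x 200 cells around the ego vehicle, 16 height layers, all unknown at start
grid = np.full((200, 200, 16), UNKNOWN, dtype=np.uint8)

# A perception model would fill this in; mark one obstacle by hand here.
grid[105:110, 98:102, 0:4] = OCCUPIED   # a low object ahead of the vehicle
grid[:100, :, :] = FREE                 # region swept clear by sensor returns

print(f"{(grid == OCCUPIED).mean():.4%} of voxels occupied")
```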

Critical to the development of safe and robust self-driving systems, 3D occupancy grid prediction provides information to autonomous vehicle (AV) planning and control stacks using state-of-the-art convolutional neural networks and transformer models, which are enabled by the NVIDIA DRIVE platform.

“NVIDIA’s winning solution features two important AV advancements,” said Zhiding Yu, senior research scientist for learning and perception at NVIDIA. “It demonstrates a state-of-the-art model design that yields excellent bird’s-eye-view perception. It also shows the effectiveness of visual foundation models with up to 1 billion parameters and large-scale pretraining in 3D occupancy prediction.”

Perception for autonomous driving has evolved in recent years from handling 2D tasks, such as detecting objects or free space in images, to reasoning about the world in 3D from multiple input images.

This provides a flexible, precise and fine-grained representation of objects in complex traffic scenes, which is “critical for achieving the safety perception requirements for autonomous driving,” according to Jose Alvarez, director of AV applied research and distinguished scientist at NVIDIA.

Yu will present the NVIDIA Research team’s award-winning work at CVPR’s End-to-End Autonomous Driving Workshop on Sunday, June 18, at 10:20 a.m. PT, as well as at the Vision-Centric Autonomous Driving Workshop on Monday, June 19, at 4:00 p.m. PT.

In addition to winning first place in the challenge, NVIDIA will receive an Innovation Award at the event, recognizing its “fresh insights into the development of view transformation modules,” with “substantially improved performance” compared with previous approaches, according to the CVPR workshop committee.

Read NVIDIA’s technical report on the submission.

Safer Vehicles With 3D Occupancy Prediction

While traditional 3D object detection — detecting and representing objects in a scene, often using 3D bounding boxes — is a core task in AV perception, it has its limitations. For example, it lacks expressiveness, meaning the bounding boxes might not represent enough real-world information. It also requires defining taxonomies and ground truths for all possible objects, even ones rarely seen in the real world, such as road hazards that may have fallen off a truck.

In contrast, 3D occupancy prediction provides rich information about the world to a self-driving vehicle’s planning stack, which is necessary for end-to-end autonomous driving.

Software-defined vehicles can be continuously upgraded with new developments that are proven and validated over time. State-of-the-art software updates that evolve from research initiatives, such as the ones recognized at CVPR, are enabling new features and safer driving capabilities.

The NVIDIA DRIVE platform offers a path to production for automakers, providing full-stack hardware and software for safe and secure AV development, from the car to the data center.

More on the CVPR Challenge

The 3D Occupancy Prediction Challenge at CVPR required participants to develop algorithms that solely used camera input during inference. Participants could use open-source datasets and models, facilitating the exploration of data-driven algorithms and large-scale models. The organizers provided a baseline sandbox for the latest state-of-the-art 3D occupancy prediction algorithms in real-world scenarios.

NVIDIA at CVPR

NVIDIA is presenting nearly 30 papers and presentations at CVPR. Experts who’ll discuss autonomous driving include:

Jose Alvarez on emerging challenges for 3D perception in AVs during the End-to-End Autonomous Driving: Emerging Tasks and Challenges Workshop; and on optimizing large deep models for real-time inference at the Embedded Vision Workshop.
Nikolai Smolyanskiy, director of deep learning at NVIDIA, on real-time traffic prediction for AVs during the End-to-End Autonomous Driving: Perception, Prediction, Planning and Simulation Workshop.
Robin Jenkin, distinguished engineer at NVIDIA, on image quality in fisheye cameras at the OmniCV Workshop, held in conjunction with CVPR.
Xinshuo Weng, research scientist for AV research at NVIDIA, on vision solutions for autonomous driving during the Vision-Centric Autonomous Driving Workshop.

View other talks on the agenda and learn more about NVIDIA at CVPR, which runs June 18-22.

Featured image courtesy of OccNet and Occ3D.


Do Pass Go, Do Collect More Games: Xbox Game Pass Coming to GeForce NOW

Xbox Game Pass support is coming to GeForce NOW.

Members will soon be able to play supported PC games from the Xbox Game Pass catalog through NVIDIA’s cloud gaming servers. Learn more about how support for Game Pass and Microsoft Store will roll out in the coming months.

Plus, Age of Empires IV: Anniversary Edition is the first from the world’s most popular real-time strategy franchise to arrive on GeForce NOW.

A Game Pass-tic Partnership

As announced over the weekend, Game Pass members will soon be able to play supported PC games from the Game Pass catalog with GeForce NOW.

Thrilled to share that in the coming months you’ll be able to play your @XboxGamePassPC games through NVIDIA GeForce NOW. Can’t wait for you to jump in! https://t.co/jZXkjHZUrf

— BondSarahBond (@BondSarah_Bond) June 12, 2023

We’re working closely with Microsoft to enable members to play select PC titles from Microsoft Store, just as they can today on GeForce NOW with their Steam, Epic Games Store, Ubisoft Connect and GOG.com accounts. Members who are subscribed to PC Game Pass or Xbox Game Pass Ultimate will be able to stream these select PC titles from the Game Pass library — without downloads or additional purchases for instant gaming from the cloud.

With hundreds of PC titles available in the Game Pass catalog, Xbox and PC gamers together can look forward to future GFN Thursdays to see what’s next. PC games from Xbox Game Studios and Bethesda on Steam and Epic Games Store will continue to be released, giving members more ways to play their favorite Xbox titles.

And with the ability for GeForce NOW members to stream at high performance across devices, including PCs, Macs, mobile devices, smart TVs, gaming handheld devices and more, gamers everywhere will be able to take their Xbox PC games wherever they go, along with the over 1,600 titles in the GeForce NOW library.

Upgrade to an Ultimate or Priority membership to skip the waiting lines ahead of free members and get into gaming even faster.

Build Your Empire — and Library

Siege the moment!

Conquer the lands in Microsoft’s award-winning Age of Empires franchise this week.

Age of Empires IV: Anniversary Edition takes the world’s most popular real-time strategy game to the next level with familiar and new ways for players to expand their empire. The Anniversary Edition brings all the latest updates, including new civilizations — the Ottomans and Malians — maps, languages, challenges and more. Choose the path to greatness and become a part of history through Campaign Story Mode with a tutorial designed for first-time players, or challenge the world in competitive or cooperative online matches that include ranked seasons.

Ultimate members can rule the kingdom in stunning 4K or ultrawide resolutions, and settle in with up to eight-hour streaming sessions.

What to Play This Week

Hand-painted nostalgia in the cloud this summer.

Take a look at the two new games available to stream this week:

Dordogne (New release on Steam)
Age of Empires IV: Anniversary Edition (Steam)

Before the weekend arrives, check out our question of the week. Let us know your answer on Twitter or in the comments below.

You’ve been chosen to build the greatest empire in history.

What time period are you choosing to build it in?

— NVIDIA GeForce NOW (@NVIDIAGFN) June 14, 2023

Forged in Flames: Startup Fuses Generative AI, Computer Vision to Fight Wildfires

When California skies turned orange in the wake of devastating wildfires, a startup fused computer vision and generative AI to fight back.

“With the 2020 wildfires, it became very personal, so we asked fire officials how we could help,” said Emrah Gultekin, the Turkish-born CEO of Chooch, a Silicon Valley-based leader in computer vision.

California utilities and fire services, they learned, were swamped with as many as 2,000 false positives a week from an existing wildfire detection system. The false alarms were triggered by fog, rain and smudges on the lenses of the camera network it relied on.

So, in a pilot project, Chooch linked its fire detection software to the camera network. It analyzed snapshots every 15 minutes, seeking signs of smoke or fire.
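The polling pattern itself is simple; here's a hypothetical sketch in which the camera URL, scoring function and alert threshold are all placeholders, since Chooch's production system isn't public:

```python
# Hypothetical sketch of the pilot's polling pattern. Camera URL, scoring
# function and threshold are placeholders; Chooch's production API is not public.
import time
import requests

CAMERAS = ["https://example.org/cams/ridge-01.jpg"]  # placeholder feed list
INTERVAL_SECONDS = 15 * 60

def detect_smoke_or_fire(image_bytes: bytes) -> float:
    """Stand-in for the detection model; returns a confidence in [0, 1]."""
    return 0.0  # a real deployment would run inference here

while True:
    for url in CAMERAS:
        snapshot = requests.get(url, timeout=30).content
        score = detect_smoke_or_fire(snapshot)
        if score > 0.5:
            print(f"ALERT {url}: confidence {score:.2f}")  # notify reviewers
    time.sleep(INTERVAL_SECONDS)
```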

Generative AI Sharpens Computer Vision

Then, the team led by Hakan Gultekin — Emrah’s brother, a software wiz and Chooch’s CTO — had an idea.

They built a generative AI tool that automatically created descriptions of each image, helping reviewers discern when smoke is present. False positives dropped from 2,000 a week to eight.

Chooch detects smoke and fire despite bad weather or dirty camera lenses.
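Chooch's captioning model is proprietary, but the filtering idea can be approximated with an open image-captioning model: describe each snapshot in words, then suppress alerts whose description never mentions smoke or fire. A sketch using Hugging Face's BLIP as a stand-in:

```python
# Approximating the caption-then-filter idea with the open BLIP model,
# a stand-in for Chooch's proprietary generative AI tool.
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")

image = Image.open("snapshot.jpg").convert("RGB")
inputs = processor(image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(out[0], skip_special_tokens=True)

if any(word in caption.lower() for word in ("smoke", "fire", "flame")):
    print("confirmed alert:", caption)
else:
    print("suppressed false positive:", caption)  # fog, rain, lens smudge, etc.
```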

“Fire chiefs were excited about launching the technology in their monitoring centers and what it could achieve,” said Michael Liou, the president of Chooch, who detailed the project in a recent webinar.

Chooch’s generative AI tool gives firefighters in California’s Kern County a dashboard on their smartphones and PCs, populated in real time with alerts, so they can detect wildfires fast.

In 2020, California experienced 9,900 wildfires that burned 4.3 million acres of forest and caused $19 billion in losses. Stopping one fire from spreading out of control would pay for the wildfire detection system for 50 years, the company estimates.

A Vision for Gen AI

Chooch’s CEO says it’s also the shape of things to come.

Emrah Gultekin

“The fusion of large language models and computer vision will bring about even more powerful and accurate products that are easier to deploy,” said Gultekin.

For example, utilities can connect the software to drones and fixed cameras to detect corrosion on capacitors or vegetation encroaching on power lines.

The technology could see further validation as Chooch enters an $11 million XPRIZE challenge on detecting and fighting wildfires. Sponsors include PG&E and Lockheed Martin, which is separately collaborating with NVIDIA on an AI lab to predict and respond to wildfires.

Dashboards for PCs and smartphones can update firefighters with real-time alerts from Chooch’s software.

Chooch applies its technology to a host of challenges in manufacturing, retail and security.

For example, one manufacturer uses Chooch’s models to detect defects before products ship. Eliminating just 20% of the faults will pay for the system several times over.

Inception of a Partnership

Back in 2019, a potential customer in the U.S. government asked for support with edge deployments it planned on NVIDIA GPUs. Chooch joined NVIDIA Inception, a free program that nurtures cutting-edge startups.

Using NGC, NVIDIA’s hub for accelerated software, Hakan was able to port Chooch’s code to NVIDIA GPUs over a weekend. Now its products run on NVIDIA Jetson modules and “have been tested in the wild with full-motion video and multispectral data,” Emrah said.

Since then, the company has rolled out support for GPUs in data centers and beyond. For example, the wildfire use case runs on NVIDIA A100 Tensor Core GPUs in the cloud.

Along the way, Chooch embraced software like Triton Inference Server and the NVIDIA DeepStream software development kit.

“The combination of DeepStream and Triton increased our capacity 8x to run more video streams on more AI models — that’s a huge win,” Emrah said.
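For a sense of why that combination scales, a Triton deployment reduces each client to a small inference request. A minimal, hypothetical HTTP-client sketch, with model and tensor names that are illustrative:

```python
# A minimal, hypothetical Triton HTTP-client sketch; model name, tensor names
# and shapes are illustrative, not Chooch's deployment.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in frame
inputs = [httpclient.InferInput("input", list(frame.shape), "FP32")]
inputs[0].set_data_from_numpy(frame)

result = client.infer(model_name="fire_detector", inputs=inputs)
print(result.as_numpy("output"))  # assumed output tensor name
```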

A Wide Horizon

Now Chooch is expanding its horizons.

The company is a member of the partner ecosystems for NVIDIA Metropolis for intelligent video analytics and NVIDIA Clara Guardian, edge AI software for smart hospitals. Chooch also works with NVIDIA’s retail and telco teams.

The software is opening new doors and expanding the use cases it can address.

“It’s hard work because there’s so much uncharted territory, but that’s also what makes it exciting,” Emrah said.

Learn more about generative AI for enterprises, and explore NVIDIA’s solutions for power grid modernization.