Survive the Quarantine Zone and More With Devolver Digital Games on GeForce NOW

NVIDIA kicked off the year at CES, where the crowd buzzed about the latest gaming announcements — including the native GeForce NOW app for Linux and Amazon Fire TV sticks, community-requested flight-control support and a stacked AAA lineup for the year.

Check out what the creators of the YouTube channels Cloud Gaming Battle and Anytime Anywhere Gaming had to say in their latest videos exploring GeForce NOW at CES:

Plus, read what the community said about the upcoming native GeForce NOW app on Linux.

The energy continues this GFN Thursday with the launch of Quarantine Zone: The Last Check, Devolver Digital’s most haunting and atmospheric release yet.

Check it out along with other top Devolver Digital games on GeForce NOW and nine new games in the cloud this week.

And flight sim fans can soar even higher with a Thrustmaster giveaway, featuring the T.Flight Hotas One MSFS Edition and one month of a GeForce NOW Ultimate membership for free — follow Thrustmaster and GeForce NOW, and repost the giveaway post for a chance to win before Jan. 24.

Welcome to the Zone

Quarantine Zone: The Last Check on GeForce NOW
Trust no one, file everything.

Quarantine Zone: The Last Check drops into the cloud this week, bringing the eerie isolation and grim decision-making of its Steam debut to any device powered by GeForce NOW.

Armed with advanced screening tools and instincts, players are put in charge of a critical checkpoint during a zombie outbreak — where one wrong choice could let the infection slip past the post. Screen survivors, manage scarce resources and hold back the undead. Holding the line is only the beginning — reinforce defenses and keep the camp alive as chaos intensifies with each passing day.

Quarantine Zone: The Last Check streams at full GeForce RTX 5080 power for GeForce NOW Ultimate members, bringing razor-sharp visuals and ultrasmooth performance to every tense border crossing. Every document check, difficult decision and late-night shift in the guard booth hits harder with maxed-out settings and buttery frame rates, all without long installs or hardware worries getting in the way of the paranoia.

No trip through the Devolver catalog would be complete without revisiting its eclectic, chaotic hits already streaming on GeForce NOW. Check out Cult of the Lamb, Ball x Pit, Hotline Miami, Hotline Miami 2: Wrong Number, Inscryption and Enter the Gungeon — where every dodge roll matters and every chest hides mayhem.

Each of these titles showcases Devolver’s knack for daring design and unforgettable worlds — now all one click away through GeForce NOW.

Time to Play

Styx: Shards of Darkness
Sink into the shadows and steal everything that isn’t nailed down.

Styx: Shards of Darkness sneaks back in with a signature goblin grin and a dagger never too far from trouble. Published by Focus Home Interactive, this stealth adventure leans into vertical level design, tight shadows and sarcastic commentary from gaming’s most lovable little assassin. Expect big, open environments to pick apart, a toolkit full of invisibility tricks, traps and clones, and plenty of chances to turn a quiet infiltration into chaotic improv when plans go sideways.

In addition, members can look for the following:

For Game Pass titles on Ubisoft Connect, reference this article on how to log in.

What are you planning to play this weekend? Let us know on X or in the comments below.

CEOs of NVIDIA and Lilly Share ‘Blueprint for What Is Possible’ in AI and Drug Discovery

NVIDIA and Lilly are putting together “a blueprint for what is possible in the future of drug discovery,” NVIDIA founder and CEO Jensen Huang told attendees at a fireside chat Monday with Dave Ricks, chair and CEO of Lilly.

The conversation — which took place during the annual J.P. Morgan Healthcare Conference in San Francisco — focused on the announcement of a first-of-its-kind AI co-innovation lab by NVIDIA and Lilly.

“We’re systematically bringing together some of the brightest minds in the field of drug discovery and some of the brightest minds in computer science,” Huang said. “We’re going to have a lab where the expertise and the scale of that lab is sufficient to attract people who really want to do their life’s work at that intersection.”

The initiative will bring together Lilly’s world-leading expertise in the pharmaceutical industry with NVIDIA’s leadership in AI to tackle one of humanity’s greatest challenges: modeling the complexities of biology. The two companies will jointly invest up to $1 billion in talent, infrastructure and compute over five years to support the new lab, which will be based in the San Francisco Bay Area.

During the fireside chat, Ricks reflected on the painstaking work of drug discovery and AI’s potential to transform the cycle of pharmaceutical invention.

“Each small molecule discovery is like a work of art,” he said. “If we can make that an engineering problem, versus this sort of discovery, this artisanal drug-making problem, think of the impact on human life.”

The lab will operate under a scientist-in-the-loop framework, where agentic wet labs are tightly connected to computational dry labs in a continuous learning system. This framework aims to enable experiments, data generation and AI model development to continuously inform and improve one another.

“Machines are made to work day and night to solve this problem,” Ricks said.

The co-innovation lab builds on Lilly’s previously announced AI supercomputer — the biopharma industry’s most powerful AI factory, an NVIDIA DGX SuperPOD with DGX B300 systems — which will train large-scale biomedical foundation and frontier models for drug discovery and development.

By integrating AI into drug discovery, Ricks explained, pharmaceutical researchers can rapidly simulate a massive number of possible molecules, test them at scale in silico and filter out promising candidates. The next challenge is to find more biological targets using AI.

“The holy grail is that you put those two things together, and we can model the whole system at once,” Ricks said.

Huang and Ricks also discussed Lilly’s long history of harnessing computing for pharmaceutical research — and how diseases of the aging brain are the next frontier for drug discovery.

“I can’t imagine a more worthy field to apply computer science to,” Huang said. “Hopefully we can bend the arc of history.”

NVIDIA at J.P. Morgan Healthcare

NVIDIA’s full-stack AI platform is accelerating the creation and deployment of leading foundation models across digital biology and drug discovery. To recognize some of the recent advancements, Huang raised a toast at J.P. Morgan Healthcare in honor of about a dozen leaders in the field — and the AI models they’ve pioneered.

“In the last 10 years, we’ve advanced AI 1 million times,” Huang said. “I believe that over the next 10 years, you will enjoy the same adventure that I’ve enjoyed in our generation … and so for each one of you — for your happy new year present and a thank you for everything that you do for the industry and for the future of humanity — I give to you a DGX Spark.”

Over a dozen leaders in AI and drug discovery received NVIDIA DGX Spark systems signed by NVIDIA founder and CEO Jensen Huang at the J.P. Morgan Healthcare Conference.

The honorees included:

At J.P. Morgan Healthcare, NVIDIA also announced a major expansion of the NVIDIA BioNeMo platform for AI-driven biology and drug discovery with tools including:

NVIDIA also highlighted a collaboration with instrumentation leader Thermo Fisher to build autonomous lab infrastructure using NVIDIA’s full-stack AI computing, as well as the work of Multiply Labs, a San Francisco-based startup that offers end-to-end robotic systems to automate cell therapy manufacturing at scale.

J.P. Morgan Healthcare is the world’s largest healthcare investment symposium, attracting over 8,000 global professionals including investors, policymakers and executives from across the healthcare industry.

For more from the conference, listen to the audio recording and view the presentation deck of a special address by Kimberly Powell, vice president of healthcare at NVIDIA, who discusses AI’s impact across healthcare.

AI’s Next Revolution: Multiply Labs Is Scaling Robotics-Driven Cell Therapy Biomanufacturing Labs

NVIDIA Unveils Multi-Agent Intelligent Warehouse and Catalog Enrichment AI Blueprints to Power the Retail Pipeline

Every “that was easy” shopping moment is made possible by teams working to hit shipping deadlines, scrambling to fix missing product details and striving to provide curated shopping experiences.

Behind the scenes, workers are dealing with the reality of aging systems, siloed data and rising customer expectations — a combination that makes consistency and speed harder to deliver with every new season and added stock-keeping unit (SKU).

New Multi-Agent Intelligent Warehouse (MAIW) and Retail Catalog Enrichment NVIDIA Blueprints are designed to turn this dynamic system into an advantage. These open-source developer references, launched today, empower developers to customize AI-powered solutions for the retail value chain, from warehouse to wardrobes.

“Building with these blueprints will reduce the cost of integration and help our customers and partners enable applications fast,” said Tarik Hammadou, director of developer relations for AI for retail and consumer packaged goods at NVIDIA. “They unlock the efficiency and enterprise‑grade scale the retail industry needs to compete.”

The blueprints will be showcased next week at the National Retail Federation: Retail’s Big Show.

Easing Warehouse Workflows

Warehouses are dynamic spaces with many moving parts, from boxes carrying a variety of retail goods to massive machines and workers fulfilling thousands of daily orders. Issues can arise in an instant — from out-of-stock items to cleanups needed in aisle four.

An ongoing issue within this workspace is a disconnect between the IT and operational technology (OT) layers. This gap bars managers from easily handling problems such as accurately measuring product inventory, efficiently pinpointing technology issues and deploying enough workers to areas that need extra help.

“The idea of having an agentic AI layer on the IT or OT level is not efficient, but having agents in between IT and OT allows the AI agents to act as the coordinators,” said Hammadou.

A look inside the MAIW Blueprint.

The NVIDIA MAIW blueprint delivers a synchronized AI system that sits above existing warehouse management systems, enterprise resource planning, robotics and IoT data, so teams gain real-time, explainable operational intelligence.

The blueprint comprises specialized agents for equipment asset operations, operations coordination, safety compliance, forecasting and document processing — all orchestrated by a central warehouse operational assistant that mirrors how warehouses actually run and turns fragmented data into proactive decision-making.

For example, a supervisor can ask in natural language, “Why is packing slow?” and the assistant analyzes equipment status, task queues and staffing data to highlight the bottleneck, show the supporting evidence and recommend actions such as rebalancing work or adjusting task priorities.
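To make the coordinator pattern above concrete, here is a minimal sketch of a central assistant fanning a natural-language question out to specialized agents and combining their findings with a recommendation. All function names, agent roles and return values here are illustrative assumptions, not the actual MAIW blueprint API.

```python
# Illustrative sketch of a multi-agent coordinator: specialized agents
# each inspect one slice of warehouse data, and a central assistant
# gathers their findings as evidence for a recommendation.
# Every name below is hypothetical, not the MAIW blueprint API.

def equipment_agent(question):
    # Placeholder: a real agent would query equipment telemetry.
    return {"finding": "conveyor 3 running at 60% speed"}

def staffing_agent(question):
    # Placeholder: a real agent would check shift schedules and task queues.
    return {"finding": "packing station short 2 workers"}

AGENTS = {
    "equipment": equipment_agent,
    "staffing": staffing_agent,
}

def warehouse_assistant(question):
    """Fan the question out to every specialized agent, then summarize."""
    evidence = {name: agent(question) for name, agent in AGENTS.items()}
    # A real system would reason over the evidence; this sketch hardcodes it.
    recommendation = "rebalance workers toward packing"
    return {"question": question, "evidence": evidence,
            "recommendation": recommendation}

answer = warehouse_assistant("Why is packing slow?")
print(answer["recommendation"])
```

The key design point the blueprint describes is that the coordinator sits between IT and OT systems, so each agent can own one data source while the assistant presents a single, explainable answer.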

The blueprint also provides production-grade capabilities — including role-based access control and guardrails to keep recommendations within policy — so operations teams can trust AI to help coordinate real equipment and safety-critical decisions.

By targeting metrics to detect and resolve issues and safety incidents, as well as ensuring on-time order fulfillment and service-level agreement adherence, MAIW helps warehouses move from constant fire drills to more predictable, data-driven shifts.

Partners such as Kinetic Vision, a product and technology development firm, can use the MAIW blueprint to innovate and tackle decades-long issues in retail supply chains.

“Charts and graphs are yesterday; we need predictions and prescribed actions,” said Jeremy Jarrett, CEO of Kinetic Vision. “The NVIDIA MAIW blueprint would allow you to have more of a central way to answer questions and prompt decision-making.”

Resolving Sparse Product Data 

The Retail Catalog Enrichment NVIDIA Blueprint can help businesses of all sizes achieve richer, more accurate product onboarding, as well as deliver localized marketing.

Retailers often face a “sparse data” problem: product images arrive with minimal or inconsistent text, and teams spend large amounts of time writing titles, descriptions and attributes, then customizing them for each market and campaign.

The blueprint addresses this by using generative AI to create high-quality, structured, localized and brand-aligned product content at scale.

For example, imagine a home goods retailer trying to update its online storefront with a basic set of ceramic mug photos. With an NVIDIA Nemotron vision language model (VLM), part of the Retail Catalog Enrichment Blueprint, the photos can be fed through the VLM to generate product metadata such as color, material, capacity, style and use cases.

From a single image, the system can then generate localized product titles and descriptions, extract and normalize attributes for search and recommendation systems for improved SEO and GEO, and create culturally relevant 2D lifestyle imagery and interactive 3D assets. Behind the scenes, an AI “judge” checks outputs for quality and consistency.
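The image-to-listing flow above can be sketched in a few lines: extract structured attributes, render a localized listing and gate the output with a quality check. The attribute schema, locale templates and the stand-in for the VLM call are all assumptions for this sketch, not the blueprint’s actual API.

```python
# Hedged sketch of the enrichment pipeline: image -> structured
# attributes -> localized listing, with an AI "judge" quality gate.
# extract_attributes() stands in for a real VLM call; all names and
# the attribute schema are illustrative assumptions.

def extract_attributes(image_path):
    # Stand-in for a vision language model that returns product metadata.
    return {"color": "matte blue", "material": "ceramic",
            "capacity_ml": 350, "style": "minimalist"}

LOCALE_TEMPLATES = {
    "en-US": "{color} {material} mug, {capacity_ml} ml",
    "de-DE": "{color} Tasse aus {material}, {capacity_ml} ml",
}

def generate_listing(attrs, locale):
    """Render a localized product title from the extracted attributes."""
    return LOCALE_TEMPLATES[locale].format(**attrs)

def judge(listing, attrs):
    # Quality gate: key attributes must actually appear in the listing.
    return all(str(v) in listing for v in (attrs["color"], attrs["material"]))

attrs = extract_attributes("mug.jpg")
listing = generate_listing(attrs, "en-US")
assert judge(listing, attrs)  # reject listings that drop attributes
print(listing)  # → matte blue ceramic mug, 350 ml
```

In the real blueprint the localization and judging steps are themselves generative models rather than templates, but the data flow — attributes in the middle, a checker at the end — is the part this sketch illustrates.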

In addition, the Retail Catalog Enrichment Blueprint can create rich, on-brand marketing content by applying brand voice, tone and taxonomy instructions via prompts, alongside the product image and a target locale. The blueprint uses those brand guidelines to generate enriched product titles and descriptions, localized categories and tags, and culturally appropriate lifestyle image variations tailored to that intent.

Grid Dynamics’ NVIDIA Blueprint-Powered Solution 

Companies are already creating their own products with the help of NVIDIA’s retail blueprints.

Global tech consulting firm Grid Dynamics has built a catalog enrichment and management system that increases the accuracy of item content and status of SKUs for large retailers, using the Retail Catalog Enrichment NVIDIA Blueprint.

“The quality of the search and the quality of the browsing experience for customers directly depends on the quality of the catalog data,” said Ilya Katsov, chief technology officer of Grid Dynamics. “It’s a very critical problem for all retailers with a digital presence to ensure their catalogs have as rich and consistent of attributes as possible — and our solution automates this so they don’t need to do manual reviews.”

For bigger retailers with massive product catalogs, attributes can be missing or incorrect. Onboarding new vendors with differing catalog structures can further jumble the data — leading to inaccurate sales, frustration and, eventually, a loss of customer loyalty.

This is where Grid Dynamics’ solution comes into play.

“Our solution makes product catalogs more discoverable while giving brands the ability to enforce their business rules at scale,” said Dan Guja, principal software engineer at Grid Dynamics. “With AI-driven business rules applied across the catalog, brands can improve data quality, sharpen customer intent signals and surface products customers actually want.”

Piecing Together the NVIDIA Retail Pipeline 

The MAIW and Catalog Enrichment NVIDIA Blueprints are part of a greater initiative to reimagine the warehouse-to-consumer workflow with AI infrastructure at each level.

On the back end, the MAIW blueprint helps managers and warehouse workers in their daily supply and data management tasks, while the Catalog Enrichment NVIDIA Blueprint lets digital teams easily curate stylized SKU pages at the click of a button. Plus, the Nemotron-Personas-USA open-source dataset can be used in the development and training of solutions, improving the diversity of synthetically generated data across a variety of shopper demographics.

On the front end, the previously released agentic NVIDIA Retail Shopping Assistant Blueprint serves as a retail expert, making product discovery and the customer shopping experience more conversational, effortless and enjoyable.

“The next step is embedding a physical AI layer into warehouse and store operations, enabling intelligent agents to see, reason, and act on real-world inventory and supply-chain challenges,” said Tarik Hammadou. “By training physical agents with capabilities like computer vision, we’re moving toward more adaptive and autonomous operations.”

Learn more about the MAIW and Retail Catalog Enrichment blueprints on the NVIDIA Technical Blog.

AI Copilot Keeps Berkeley’s X-Ray Particle Accelerator on Track

In the rolling hills of Berkeley, California, an AI agent is supporting high-stakes physics experiments at the Advanced Light Source (ALS) particle accelerator.

Researchers at the Lawrence Berkeley National Laboratory ALS facility recently deployed the Accelerator Assistant, a large language model (LLM)-driven system to keep X-ray research on track.

The Accelerator Assistant — powered by an NVIDIA H100 GPU harnessing CUDA for accelerated inference — taps into institutional knowledge data from the ALS support team and routes requests through Gemini, Claude or ChatGPT. It writes Python and solves problems, either autonomously or with a human in the loop.

This is no small task. The ALS particle accelerator sends electrons traveling near the speed of light in a 200-yard circular path, emitting ultraviolet and X-ray light, which is directed through 40 beamlines for 1,700 scientific experiments per year. Scientists worldwide use this process to study materials science, biology, chemistry, physics and environmental science.

At the ALS, beam interruptions can last minutes, hours or days, depending on their complexity, halting concurrent scientific experiments in progress. And much can go wrong: the ALS control system has more than 230,000 process variables.

“It’s really important for such a machine to be up, and when we go down, there are 40 beamlines that do X-ray experiments, and they are waiting,” said Thorsten Hellert, staff scientist from the Accelerator Technology and Applied Physics Division at Berkeley Lab and lead author of a research paper on the groundbreaking work.

Until now, facility staff troubleshooting issues have had to quickly identify the areas, retrieve data and gather the right personnel for analysis under intense time pressure to get the system back up and running.

“The novel approach offers a blueprint for securely and transparently applying large language model-driven systems to particle accelerators, nuclear and fusion reactor facilities, and other complex scientific infrastructures,” said Hellert.

The research team demonstrated that the Accelerator Assistant can autonomously prepare and run a multistage physics experiment, cutting setup time and effort by roughly 100x.

Applying Context Engineering Prompts to Accelerator Assistant

The ALS operators interact with the system through either a command line interface or Open WebUI, which enables interaction with various LLMs and is accessible from control room stations, as well as remotely. Under the hood, the system uses Osprey, a framework developed at Berkeley Lab to apply agent-based AI safely in complex control systems.

Each user is authenticated, and the framework maintains personalized context and memory across sessions. Multiple sessions can be managed simultaneously, allowing users to organize distinct tasks or experiments into separate threads. These inputs are routed through the Accelerator Assistant, which connects to the database of more than 230,000 process variables, a historical database archive service and Jupyter Notebook-based execution environments.

“We try to engineer the context of every language model call with whatever prior knowledge we have from this execution up to this point,” said Hellert.

Inference is done either locally — using Ollama, an open-source tool for running LLMs on a personal computer, on an H100 GPU node located within the control room network — or externally with the CBorg gateway, a lab-managed interface that routes requests to external tools such as ChatGPT, Claude or Gemini.

The hybrid architecture balances secure, low-latency, on-premises inference with access to the latest foundation models. Integration with EPICS (Experimental Physics and Industrial Control System) enables operator-standard safety constraints for direct interaction with accelerator hardware. EPICS is a distributed control system used in large-scale scientific facilities such as particle accelerators. Engineers can write Python code in Jupyter Notebook that can communicate with it.
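The local-versus-external trade-off described above can be expressed as a small routing policy: requests touching facility data or needing low latency stay on the local GPU node, while everything else goes to the external gateway. The field names, thresholds and backend labels below are assumptions for the sketch, not the actual Osprey or CBorg interfaces.

```python
# Minimal sketch of hybrid inference routing: sensitive or low-latency
# requests use the on-premises GPU node, the rest go to the external
# gateway. Labels and thresholds are illustrative assumptions.

def route_inference(request):
    """Return which backend a request should use."""
    # Facility data must never leave the control room network, and
    # tight latency budgets rule out an external round trip.
    if request.get("contains_facility_data") or request.get("latency_ms", 1e9) < 100:
        return "local-ollama-h100"
    return "cborg-external-gateway"

assert route_inference({"contains_facility_data": True}) == "local-ollama-h100"
assert route_inference({"latency_ms": 50}) == "local-ollama-h100"
print(route_inference({"prompt": "summarize this paper"}))  # → cborg-external-gateway
```

A policy like this keeps the security decision in one auditable place, which matters in a control system where the same interface can reach accelerator hardware.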

Basically, conversational input is turned into a clear natural language task description of the objectives, without redundancy. External knowledge, such as user-specific memory, documentation and accelerator databases, is integrated to assist with terminology and context.
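The context-engineering step just described can be sketched as assembling each model call from the task description plus whatever prior knowledge is on hand. The section layout, field names and example process variable below are illustrative assumptions, not Osprey’s actual prompt format.

```python
# Hedged sketch of context engineering: every LLM call is built from
# the task plus user memory, documentation snippets and database
# lookups gathered so far. All structure here is hypothetical.

def build_context(task, user_memory, docs, pv_lookup):
    """Assemble a single prompt string from the available prior knowledge."""
    sections = [f"TASK: {task}"]
    if user_memory:
        sections.append("USER MEMORY:\n" + "\n".join(user_memory))
    if docs:
        sections.append("DOCS:\n" + "\n".join(docs))
    if pv_lookup:
        sections.append("PROCESS VARIABLES:\n" +
                        "\n".join(f"{name} = {desc}"
                                  for name, desc in pv_lookup.items()))
    return "\n\n".join(sections)

prompt = build_context(
    task="Plot the storage ring temperature over the last hour",
    user_memory=["user prefers matplotlib figures"],
    docs=["Archiver API returns (timestamp, value) pairs"],
    pv_lookup={"SR:TEMP:01": "temperature sensor, sector 1"},
)
print(prompt.splitlines()[0])  # → TASK: Plot the storage ring temperature over the last hour
```

Packing the relevant process-variable addresses into the prompt is exactly what spares users the hunt Hellert describes next — finding something as simple as a temperature sensor’s address.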

“It’s a large facility with a lot of specialized expertise,” said Hellert. “Much of that knowledge is scattered across teams, so even finding something simple — like the address of a temperature sensor in one part of the machine — can take time.”

Tapping Accelerator Assistant to Aid Engineers, Fusion Energy Development

Using the Accelerator Assistant, engineers can start with a simple prompt describing their goal. Behind the scenes, the system draws on carefully prepared examples and keywords from accelerator operations to guide the LLM’s reasoning.

“Each prompt is engineered with relevant context from our facility, so the model already knows what kind of task it’s dealing with,” said Hellert.

Each agent is an expert in its field, he said.

Once the task is defined, the agent brings together its specialized capabilities — such as finding process variables or navigating the control system — and can automatically generate and run Python scripts to analyze data, visualize results or interact safely with the accelerator itself.

“This is something that can save you serious time — in the paper, we say two orders of magnitude for such a prompt,” said Hellert.

Looking ahead, Hellert aims to have the ALS engineers put together a wiki documenting the many processes that support the experiments. These documents could help the agents run the facilities autonomously — with a human in the loop to approve the course of action.

“On these high-stakes scientific experiments, even if it’s just a TEM microscope or something that might cost $1 million, a human in the loop can be very important,” said Hellert.

The work has already expanded beyond ALS as part of the DOE’s Genesys mission, with the framework being deployed across U.S. particle accelerator facilities. Hellert has also begun collaborating with engineers at ITER — the world’s largest fusion reactor, located in France — to implement the framework at that facility, and has a collaboration in the works with the Extremely Large Telescope (ELT) in northern Chile.

Benefiting Humanity: Scientific Impact of Experiments Supported by ALS

Beyond optimizing the accelerator and other industrial operations, the work at the ALS directly enables scientific breakthroughs with global impact. The facility’s stable X-ray beams underpin research in health, climate resilience and planetary science.

During the COVID-19 pandemic, ALS researchers helped characterize a rare antibody that could neutralize SARS-CoV-2. Structural biology experiments at Beamline 4.2.2 revealed how six molecular loops of the antibody latch onto and disable the viral spike protein. The findings supported the rapid development of a therapeutic that remained effective through multiple variants.

ALS science also contributes to climate-focused research. Metal-organic frameworks (MOFs) — a class of porous materials capable of capturing water or carbon dioxide from air — were extensively studied across several ALS beamlines. These experiments supported foundational work that ultimately led to the 2025 Nobel Prize in Chemistry, recognizing the transformative potential of MOFs for sustainable water harvesting and carbon management.

In planetary science, ALS measurements of samples returned from NASA’s OSIRIS-REx mission helped trace the chemical history of asteroid Bennu. X-ray analyses provided evidence that such asteroids carried water and molecular precursors of life to early Earth, deepening our understanding of the origins of the planet’s habitable conditions.


Japan Science and Technology Agency Develops NVIDIA-Powered Moonshot Robot for Elderly Care

The next universal technology since the smartphone is on the horizon — and it may be a little less pocket-friendly.

The Moonshot research program, funded by the Japan Science and Technology Agency and accelerated by NVIDIA AI and robotics technologies, is working to create a world by 2050 where AI-powered, autonomously learning robots are integrated into Japanese citizens’ everyday lives.

That’s just goal No. 3 of the broader Moonshot initiative, which includes researchers from across Japan’s universities and comprises 10 ambitious technology goals — from ultra-early disease prediction to sustainable resource circulation.

In light of Japan’s rising elderly population, many of the research projects underway center on how robots can aid in senior care. This includes designing a robot that’s capable of caregiving tasks like cooking, cleaning and hygiene care.

NVIDIA Architecture Powers On Moonshot Robots

NVIDIA technologies are integrated into every level of the Moonshot project’s senior care robots known as AI-Driven Robot for Embrace and Care, or AIREC.

The Dry-AIREC robot, the larger and more mobile member of the Moonshot family, has two NVIDIA GPUs onboard. AIREC-Basic, primarily used to collect data for the motion foundation model, relies on three NVIDIA Jetson Orin NX modules to power AI processing at the edge.

Pictured are two Moonshot robots: Dry-AIREC and AIREC-Basic.

Plus, NVIDIA Isaac Sim, an open-source robotic simulation framework, was used to train the AIREC robots to perform specific tasks, such as estimating the forces between objects.

The integration of NVIDIA technologies and AI into the robot development process has allowed this project to go from a far-fetched dream to reality faster than imagined.

“Five years ago, before generative AI, few people believed that this application was possible,” said Tetsuya Ogata, professor and director of the Institute for AI and Robotics at Waseda University. “Now, the atmosphere surrounding this technology has changed, so we can seriously think about this kind of application.”

Building a Full Set of Caregiving Capabilities

Additional research projects are underway to develop the Moonshot robot’s elderly-care capabilities.

“We’re focusing on things like changing diapers, helping patients take baths and providing meal assistance, so those actions can be supported by the robots, and caregivers can focus on improving the patients’ lives,” said Misa Matsumura, a bioengineering master’s student at the University of Tokyo.

A recent paper by Matsumura — presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems — focused on repositioning, an essential action in elderly care to prevent bed sores and enable diaper changing.

Automating repositioning with a humanoid robot — while considering the elderly care patients’ personal states and bodily needs — is no easy feat.

To train the Dry-AIREC robots for this research endeavor, Matsumura’s team used laptops powered by NVIDIA RTX GPUs.

Matsumura used 3D posture estimation, trajectory calculations and force estimation to further develop the robots’ capabilities.

Dry-AIREC’s fisheye and depth cameras helped assess the movements required to reposition patients. The exact repositioning method needed for a patient is found through trajectory calculations based on movement data from skilled caregivers.

The robot must also use the right amount of force in repositioning to complete the action without causing the patient pain. By predicting the pressure required to press the shoulders and knees, it determines the appropriate timing for movement — enabling actions with the ideal applied force.

Preliminary experiments were done using mannequins, and Matsumura’s research has now advanced to incorporate humans testing the robots. Matsumura is conducting ongoing research to further improve this action for Dry-AIREC.

Three illustrations of Moonshot robots performing caregiving activities including folding laundry, cooking food and washing a patient.
Milestone images for goal No. 3 for the Japan Science and Technology Agency.

Among the many projects within the Moonshot program, developing robots for elderly care has particular significance for some of the researchers due to the project’s social and personal implications.

“Although my study focus is on medical robotics, I decided to join this project because my mother is growing older, and that experience has given me an appreciation for the importance of personal care,” said Etsuko Kobayashi, professor of bioengineering at the University of Tokyo and Matsumura’s graduate advisor. “I found that my experience in medical robotics can be meaningfully extended to care robotics, contributing to the development of safe and reliable robotic systems for human-centered applications.”

The Moonshot team for goal No. 3 will showcase their progress at the 2026 International Symposium on System Integration in January.

Learn more about NVIDIA Isaac Sim.

More Ways to Play, More Games to Love — GeForce NOW Wraps CES With Linux Support, Fire TV App, Flight Stick Controls

NVIDIA is wrapping up a big week at the CES trade show with a set of GeForce NOW announcements that are bringing more ways to play and more games to the cloud.

From new native apps for Linux and Amazon Fire TV streaming sticks to hands-on throttle-and-stick (HOTAS) support for simulation fans and a new single sign-on option, GeForce NOW is expanding across devices and making it easier to jump straight into gameplay.

Topping it off, a new wave of AAA titles — including IO Interactive’s 007 First Light, Capcom’s Resident Evil Requiem and Pearl Abyss’ Crimson Desert — is gearing up to join the GeForce NOW library for high-fidelity, low-latency streaming across the globe.

Get ready with the six new games available to stream this week.

Leveling Up Where Gamers Play

GFN family of devices
The family of GeForce NOW supported devices is growing.

A new native GeForce NOW app is launching in beta for Linux, starting with Ubuntu 24.04 and later. This answers a long-standing request from the gaming community, turning compatible systems into RTX-powered gaming rigs that can stream at up to 5K resolution and 120 frames per second (fps) or 1080p 360 fps. With rendering handled in the cloud, Linux gamers can tap into ray tracing, NVIDIA DLSS and other RTX features without needing a local, high-end GPU.

Amazon Fire TV stick with GFN
Things are heating up: GeForce NOW on Fire TV streaming sticks.

A new native app for Amazon Fire TV streaming sticks transforms the tiny device into a big-screen cloud gaming endpoint, giving members another way to stream their PC libraries straight to the TV with a gamepad in hand. It builds on existing device support, making it easier for gamers to pick up and play from the couch without a console or gaming PC attached.

Clearer Skies Ahead 

GFN flight controls
Cleared for takeoff in the cloud.

Flight control support has landed in the cloud, letting virtual pilots connect their favorite gear — including industry-leading Thrustmaster and Logitech setups — to custom HOTAS configurations. Whether locking in a full desktop throttle-and-stick rig or fine-tuning a split cockpit arrangement, everything’s ready for takeoff.

Paired with the incredible RTX 5080-powered performance available to Ultimate members, plus ultralow-latency streaming and NVIDIA Reflex in supported titles, pilots can now enjoy ultra-precise, hyper-immersive flight experiences straight from the cloud. Soaring over detailed landscapes in Microsoft Flight Simulator 2024, dogfighting in War Thunder or exploring the stars in Elite Dangerous has never felt smoother.

Easier Sign-Ins, Faster Game Time

GeForce NOW now supports new games — and more ways to get into them faster.

Gaijin SSO on GeForce NOW
Because remembering passwords is so last season.

Battle.net single sign-on recently joined the service, letting members jump straight into supported titles without juggling extra credentials each time. And early this year, Gaijin.net integration will make takeoff even smoother — one simple sign-on connects more game libraries across devices.

Fewer passwords and pop-ups mean more time playing — whether on a desktop, laptop, handheld device or TV.

Next Up in the Cloud

A new wave of AAA titles is headed to GeForce NOW, bringing spies, survivors and freeswords to the cloud for members to stream across their favorite devices. The games each bring a distinct flavor — from stealthy espionage to nerve-shredding horror and gritty fantasy combat — and they’re all gearing up to join an already-stacked GeForce NOW library when they launch on PC.

007 on GeForce NOW
Earn the number.

IO Interactive’s 007 First Light brings a modern James Bond origin story to the cloud when the PC title launches May 27, inviting members to step into the shadows as Bond begins his journey. Experience a balanced mix of stealth and action built around a “breathing” gameplay loop with IOI’s cinematic, set-piece-driven storytelling. Approach encounters through stealth, direct action or creative improvisation, with multiple viable paths to each objective that let players define Bond’s style.

Resident Evil Requiem
Welcome to the Wrenwood Hotel — late checkout is not recommended.

Capcom’s Resident Evil Requiem continues the legendary survival-horror saga with a new chapter that doubles down on tense exploration, eerie environments and resource-tight combat. Expect dimly lit corridors, unsettling encounters and that classic feeling of weighing every bullet and healing item before moving forward when playing as Grace, and exhilarating, death-defying action when playing as Leon.

Active Matter on GeForce NOW
Everything matters in the cloud.

Gaijin Entertainment’s Active Matter, coming this year, is a realistic military shooter where players join dangerous raids for loot or intense player vs. player battles set in a fractured multiverse. Active matter can be harvested from active matter-transformed creatures or seized from other players. Join the never-ending hunt to improve the chances of finding a way out of the time loop.

Crimson Desert on GeForce NOW
Open world, closed casket.

Pearl Abyss’ Crimson Desert is a stunning open-world action adventure set in a war-torn fantasy land, pairing large-scale exploration with cinematic storytelling and hard-hitting, combo-focused combat. One moment is a quiet ride across windswept plains, the next is a chaotic clash against towering foes with mystical power effects lighting up the battlefield.

Playing these titles on GeForce NOW means jumping into big-budget experiences with RTX 5080-class performance, high frame rates and advanced graphics features on a wide range of devices, without worrying about downloads, patches or local hardware requirements. Members can pick up the same adventure on a desktop, laptop, handheld device or TV, letting each of these worlds shine wherever and however they choose to play.

Hello New Games

Pathologic 3 on GeForce NOW
Welcome back to the plague.

The eerie world of Pathologic 3, from Ice-Pick Lodge and tinyBuild, invites players to step once again into a haunting town teetering on the edge of disaster. Time is fleeting, trust is scarce and every conversation feels like a test of loyalty — or survival. With its signature blend of tension, philosophy and dark humor, Pathologic 3 doesn’t just tell a story; it dares players to live through it.

In addition, members can look for the following:

GeForce RTX 5080-ready game:

What are you planning to play this weekend? Let us know on X or in the comments below.

Steel, Sensors and Silicon: How Caterpillar Is Bringing Edge AI to the Jobsite

From Warehouse to Wallet: New State of AI in Retail and CPG Survey Uncovers How AI Is Rewiring Supply Chains and Customer Experiences

AI has transformed retail and consumer packaged goods (CPG) operations, enhancing customer analysis and segmentation to enable greater personalization for marketing and advertising, and boosting the speed and accuracy of demand forecasting for supply chains and logistics.

Companies are also raising the bar for customer engagement through intelligent digital shopping assistants and catalog enrichment by dynamically enhancing and localizing product information. AI agents are increasing the speed and efficiency of operations, while physical AI systems are helping streamline and automate warehouse and supply chain operations.

NVIDIA’s third annual State of AI in Retail and Consumer Packaged Goods survey report, which garnered hundreds of responses, showed maturation of AI within the industry as companies move AI projects from pilot to production in all areas.

Highlights of this year’s report include:

Read more below on some of the report’s key findings.

Open Source Opens Opportunities

Open source has quickly become the foundation of many retail AI systems, giving teams the flexibility to adapt models to their data and use cases while maintaining strong governance. Open, interoperable ecosystems also make it easier to plug AI into existing tools and workflows, helping retailers rapidly scale innovation.

“Most retailers first started experimenting with AI using proprietary AI vendors,” said Jason Goldberg, chief commerce strategy officer of Publicis Groupe. “They had the models, but they didn’t own the keys to their own kingdom. Open source flips that script, allowing retailers to leverage their proprietary data, avoid vendor lock-in and benefit from open-source community innovation.”

AI Unlocks Significant Business Impact

With 91% of respondents saying their companies are either actively using or assessing AI, the competitive question in retail and CPG has shifted from whether to invest in AI to how to most effectively deploy and scale it.

Across the industry, the business impact of AI has been tangible and significant. When asked how AI has improved their business, 54% cited improved employee productivity; 52% said AI has helped to create operational efficiencies; and 41% reported improved customer service.

Notably, 89% of respondents said AI has helped to increase revenue. For many companies, that increase has been significant, with 30% stating revenue has increased by more than 10%. The story is the same for AI’s role in helping to decrease annual costs, with 95% agreeing AI has reduced costs and 37% saying costs have been reduced by more than 10%.

“What executives should be focused on is not green-lighting vanity projects at the expense of high-ROI wins,” said Chris Walton, co-CEO of Omni Talk. “The retailers who will succeed will start with boring use cases that solve specific P&L problems, prove the value, then scale.”

AI investment, including infrastructure, hiring AI experts and software, will increase next year, according to nine out of 10 survey respondents. And half of respondents said the increase could be significant, with budgets increasing 10% or more year over year.

Agentic AI Makes Big Debut in Retail

The retail and CPG industry is piloting AI agents across lines of business.

Overall, 47% of survey respondents said they’re using or assessing agentic AI — with 20% saying AI agents are already active in their organizations and another 21% reporting agents are coming within the next year.

“The truly disruptive impact of agentic AI will hit retail supply chains and operations first, such as autonomous agents handling real-time inventory rebalancing, dynamic pricing and vendor negotiations at scale, because that’s where the ROI is measurable,” said Walton.

Survey respondents identified three clear goals for agentic AI in retail and CPG:

Broadly speaking, agentic AI will be spread across three operational lines: internal operations, employee and customer support, and customer engagement. In customer engagement, for instance, agents go beyond analytics to act on insights in real time — adjusting messages, recommending products and guiding purchase decisions based on individual customer contexts.

AI Providing Resilience to the Supply Chain

Retail and CPG have faced intense supply chain challenges this decade, and those challenges are only growing more complex. Sixty-four percent of respondents in this year’s survey reported increased challenges in the supply chain year over year, such as geopolitical instability, labor constraints, evolving consumer expectations for speed and transparency, and regulatory complexity across global operations.

“AI lets retailers optimize inventory at the store and customer level rather than at a regional level,” said Goldberg. “AI allows retailers to incorporate many more factors in their demand forecasts, and much more accurately predict and avoid out of stocks, by much more accurately matching supply to demand.”

The industry is turning to AI to streamline operations and solve complexity. The top pressure valve is using AI for supply chain operational efficiency and throughput, according to 51% of respondents. Meeting customer expectations was next on the list at 45%, and solving for traceability and transparency was third, at 38%.

Physical AI is gaining ground in the industry, with 17% of respondents using or evaluating the technology.

“The real transformation will come from AI that makes existing physical infrastructure smarter,” said Walton. “My favorite example is in-store robotics. Through them, you get better pricing, better inventory management and better presentation quality.”

The early movers demonstrate that, when integrated thoughtfully, physical AI systems deliver more than task automation, enhancing flexibility and throughput in response to workforce pressures and rising logistical complexity.

Download the “State of AI in Retail and CPG: 2026 Trends” report for in-depth results and insights.

Explore NVIDIA’s AI solutions and enterprise-level AI platforms for retail.

NVIDIA Brings GeForce RTX Gaming to More Devices With New GeForce NOW Apps for Linux PC and Amazon Fire TV

Announced at the CES trade show running this week in Las Vegas, NVIDIA is bringing more devices, more games and more ways to play to its GeForce NOW cloud gaming service. Powered by GeForce RTX 5080-class performance on the NVIDIA Blackwell RTX platform, GeForce NOW Ultimate continues to raise the bar for PC gamers streaming from the cloud.

GeForce RTX 5080-powered servers are live globally for Ultimate members, delivering up to 5K resolution 120 frames-per-second (fps) streaming and up to 360 fps at 1080p with NVIDIA Reflex technology support for ultralow-latency, competitive play. Cinematic-Quality Streaming mode enhances image clarity and text sharpness for visually rich single-player adventures on nearly any screen.

New this year, GeForce NOW is expanding that performance to more platforms than ever, headlined by a native Linux PC app and a new app for Amazon Fire TV sticks.

Flight-simulation fans are also getting flight controls support, and members everywhere gain faster access to more games thanks to new single sign-on integrations and upcoming AAA titles joining the cloud.

Here Come the Platforms

Linux PCs and Amazon Fire TV sticks are joining the GeForce NOW native app family, unlocking new ways to play in the cloud across desktops and living rooms.

These new apps build on GeForce NOW’s existing support for Windows PCs, macOS, Chromebooks, mobile devices, smart TVs, virtual-reality devices and handhelds, all tapping into the same GeForce RTX 5080-class performance wherever members log in.

GeForce NOW on Linux
Turn your Linux PC into an RTX gaming rig.

A new native GeForce NOW app for Linux PCs, supported with Ubuntu 24.04 and later distributions, answers one of the top requests from the PC gaming community. Linux users can transform their compatible systems into GeForce RTX-powered gaming rigs, streaming supported PC titles from the cloud at up to 5K and 120 fps or 1080p 360 fps.

With rendering handled in the cloud, high-end PC gaming is possible on Linux operating systems, breathing new life into older devices. Members can enjoy ray tracing, NVIDIA DLSS 4 and other RTX technologies without needing a local high-performance GPU. The app is designed to bring a seamless, native experience that fits naturally into Linux desktop workflows while giving access to the expansive GeForce NOW library, turning everyday Linux devices into RTX gaming powerhouses.

The app is expected to enter a beta release early this year.

GeForce NOW on Amazon Fire TV
Game on in the living room.

A new native GeForce NOW app for select Amazon Fire TV sticks — starting with the Fire TV Stick 4K Plus (2nd Gen) and Fire TV Stick 4K Max (2nd Gen) — can bring RTX-powered PC gaming to another big screen in the home. Members can stream their compatible PC game libraries directly to Fire TV-connected displays to turn a compact streaming stick into a powerful cloud gaming rig.

With support for gamepads and GeForce NOW’s familiar interface, Fire TV users can jump into their favorite supported games without a console or gaming PC attached to the TV. This builds on existing TV support and helps make GeForce NOW the easiest way to bring high-performance PC gaming into the living room.

The app is expected to be available in countries where compatible Amazon Fire TV sticks and GeForce NOW are offered and will be launching early this year.

Take Flight

Flight control support on GeForce NOW
No fight, just flight in the cloud.

GeForce NOW turns more devices into powerful cloud gaming rigs, and CES this year brings another of the community’s most-requested additions.

Simulation fans are getting a major upgrade with flight controls support on GeForce NOW. Popular flight sticks and throttle systems from leading brands such as Thrustmaster and Logitech can be used as affixed hands-on throttle-and-stick desktop units or as separately mounted stick-and-throttle setups for custom cockpits.

Combined with RTX 5080 performance, ultralow-latency streaming and NVIDIA Reflex in supported titles, flight controls let virtual pilots experience greater precision and deeper immersion in their favorite flight- and space-simulation games — including Microsoft Flight Simulator 2024, Elite Dangerous and War Thunder. When the feature launches early this year, members can build out detailed simulation setups at home while the cloud handles the heavy lifting.

Blockbusters in the Cloud

The GeForce NOW catalog includes thousands of supported games from top PC stores like Steam, Epic Games Store, Xbox and others, with more joining every week. Backed by RTX 5080-class performance, members can stream everything from competitive shooters to expansive role-playing games with high frame rates, advanced graphics features and low latency.

New AAA titles such as IO Interactive’s 007 First Light, Capcom’s Resident Evil Requiem, Pearl Abyss’ Crimson Desert and Gaijin Entertainment’s Active Matter are coming to GeForce NOW when they launch on PC, adding to an already robust lineup of new releases and fan favorites.

AAA games coming to GeForce NOW
License to stream.

007 First Light drops players into a modern James Bond origin story filled with stealth, espionage and cinematic action. Resident Evil Requiem continues the iconic survival-horror series with a new protagonist facing terrifying threats in a chilling new setting. Crimson Desert blends open-world exploration, cinematic storytelling and intense combat in a richly detailed fantasy world. Active Matter from Gaijin is a realistic military shooter where players join dangerous raids for loot or intense player vs. player battles set in a fractured multiverse.

Members can look forward to seeing these and other upcoming hits arrive on the service, with updates shared regularly on GFN Thursdays.

One Login, Many Worlds

Gaijin SSO on GeForce NOW
Just sign in once. The rest is game history.

GeForce NOW is also making it faster to jump into gaming with new account and platform integrations. Recent updates introduced Battle.net automatic sign-in, letting members connect their accounts and access supported titles more quickly.

That seamless experience is expanding to additional game stores, with Gaijin.net set to support automatic sign-in on GeForce NOW early this year. Members will be able to authenticate once and jump into War Thunder and other titles with fewer steps.

Learn more about the latest NVIDIA-powered innovations at CES, running through Friday, Jan. 9.

See notice regarding software product information.