From Code to Clinic, Smart Hospital Tech Boosts Efficiency, Sustainability in Medicine

NVIDIA is collaborating with clinical organizations across Europe to bring AI to the point of care, bolstering clinical pathways with efficiency gains and new data dimensions that can be included in medical decision-making processes.

The University Hospital Essen, in western Germany, is one such organization taking machine learning from the bits to the bedside — using NVIDIA technology and AI to build smart hospitals of the future.

Jens Kleesiek and Felix Nensa, professors at the School of Medicine of the University of Duisburg-Essen, are part of a four-person team leading the research groups that established the Institute for Artificial Intelligence in Medicine (IKIM). The technology developed by IKIM is integrated with the IT infrastructure of University Hospital Essen.

IKIM hosts a data annotation lab, overseen by a team of board-certified radiologists, that accelerates the labeling of anatomic structures in medical images using MONAI, an open-source, PyTorch-based framework for building, training, labeling and deploying AI models for healthcare imaging.

MONAI was created by NVIDIA in collaboration with over a dozen leading clinical and research organizations, including King’s College London.

IKIM researchers also use self-supervised learning to pretrain AI models that generate high-quality labels for the hospital’s CT scans, MRIs and more.

Additionally, the IKIM team has developed a smart hospital information platform, or SHIP, an AI-based central healthcare data integration platform and deployment engine. The platform is used by researchers and clinicians to conduct real-time analysis of the slew of data in university hospitals — including medical imaging, radiology reports, clinic notes and patient interviews.

SHIP can, for example, flag an abnormality on a radiology report and notify physicians via real-time push notifications, enabling quicker diagnoses and treatments for patients. The AI can also pinpoint data-driven associations between healthcare metrics like genetic traits and patient outcomes.
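A toy sketch of the kind of flag-and-notify logic described above, in Python. The keyword list, function names and notification format are all hypothetical placeholders; SHIP's actual pipeline relies on trained deep learning models rather than keyword matching.

```python
# Hypothetical placeholder logic: flag urgent findings in a report and format
# a push notification. SHIP's real pipeline uses deep learning, not keywords.
CRITICAL_FINDINGS = {"pneumothorax", "hemorrhage", "embolism", "fracture"}

def flag_report(report_text: str) -> list[str]:
    """Return the critical findings mentioned in a radiology report."""
    text = report_text.lower()
    return sorted(f for f in CRITICAL_FINDINGS if f in text)

def notify_physician(report_id: str, findings: list[str]) -> str:
    """Format a push-notification payload for the care team."""
    return f"Report {report_id}: urgent findings - {', '.join(findings)}"

findings = flag_report("CT chest: small apical pneumothorax on the left.")
message = notify_physician("R-001", findings)
```

Note that a production system must also handle negation ("no hemorrhage seen"), which simple keyword matching cannot, one reason trained language models are used instead.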

“We want to solve real-world problems and bring the solutions right into the clinics,” Kleesiek said. “The SHIP framework is capable of delivering deep learning algorithms that analyze data straight to the clinicians who are at the point of care.”

Plus, increased workflow efficiency — enabled by AI — means increased sustainability within hospitals.

Making Hospitals Smarter

Nensa says his hospital currently has close to 500 IT systems, including those for hospital information, laboratories and radiology. Each consists of critical patient information that’s interrelated — but data from disparate systems can be difficult to connect or draw machine learning-based insights from.

SHIP connects the data from all such systems by automatically translating it into a description standard called fast healthcare interoperability resources, or FHIR, which is commonly used in medicine to exchange electronic health records. SHIP currently encompasses more than 1.2 billion FHIR resources.
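As a rough illustration of what such a translation step produces, the sketch below maps a hypothetical lab-system record to a FHIR R4 Observation resource as a plain dict. The input record layout and helper name are invented; SHIP's real translation layer is far more general.

```python
# Illustrative sketch: convert one lab value from a proprietary record layout
# (hypothetical) into a FHIR R4 Observation resource, ready for JSON encoding.
import json

def to_fhir_observation(record: dict) -> dict:
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": record["loinc_code"],
                "display": record["test_name"],
            }]
        },
        "subject": {"reference": f"Patient/{record['patient_id']}"},
        "valueQuantity": {"value": record["value"], "unit": record["unit"]},
    }

lab_row = {"patient_id": "12345", "loinc_code": "718-7",
           "test_name": "Hemoglobin [Mass/volume] in Blood",
           "value": 13.2, "unit": "g/dL"}
observation = to_fhir_observation(lab_row)
print(json.dumps(observation, indent=2))
```

Once every source system emits resources in this shared shape, downstream consumers can query them uniformly regardless of which of the hospital's IT systems produced the data.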

Once converted to FHIR, the information can be easily accessed by data scientists, researchers and clinicians for real-time AI training and analysis powered by NVIDIA GPUs and DGX A100 systems. This makes it possible for labor-intensive tasks, such as liver volumetry prior to living donor liver transplantation or bone age estimation in children, to be performed fully automatically in the background, instead of requiring a half-hour of manual work by a radiologist.
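The volumetry step itself is simple arithmetic once a model has produced a segmentation mask, which is part of why it automates so well. A minimal pure-Python sketch, assuming a binary 3D mask and known voxel spacing:

```python
def segmented_volume_ml(mask, spacing_mm=(1.0, 1.0, 1.0)):
    """Volume of a binary 3D segmentation mask in millilitres.

    mask: nested lists [z][y][x] of 0/1 values (in practice, the output of a
    trained segmentation model); spacing_mm: voxel edge lengths in mm.
    1 ml == 1000 mm^3.
    """
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    n_voxels = sum(v for plane in mask for row in plane for v in row)
    return n_voxels * voxel_mm3 / 1000.0

# Toy mask: 2 slices of 2x2 voxels, 5 voxels labeled, 10 mm cubic voxels.
mask = [[[1, 1], [1, 0]],
        [[1, 1], [0, 0]]]
volume = segmented_volume_ml(mask, spacing_mm=(10.0, 10.0, 10.0))
```

The hard, formerly manual part is producing the mask; with a model doing that in the background, the volume itself falls out of a single sum.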

“The more artificial intelligence is at work in a hospital, the more patients can enjoy human intelligence,” Nensa said. “As AI provides doctors and nurses relief from repetitive tasks like data retrieval and annotation, the medical professionals can focus on what they really want to do, which is to be there and care for their patients.”

NVIDIA DGX A100 systems power IKIM’s AI training and inference. NVIDIA Triton Inference Server enables fast and scalable concurrent serving of AI models within the clinic.

The IKIM team also uses NVIDIA FLARE, an open-source platform for federated learning, which allows data scientists to develop generalizable and robust AI models while maintaining patient privacy.

Smarter Equals Greener

In addition to reducing physician workload and increasing time for patient care, AI in hospitals boosts sustainability efforts.

As a highly specialized medical center, the University Hospital Essen must be available year-round for reliable patient treatment, with 24-hour operation times. As a result, patient-oriented, cutting-edge medicine has traditionally been associated with high energy consumption.

SHIP helps hospitals increase efficiency, automating tasks and optimizing processes to reduce friction in the workflow — which saves energy. According to Kleesiek, IKIM reuses the energy emitted by GPUs in the data center, which also helps to make the University Hospital Essen greener.

“NVIDIA is providing all of the layers for us to get the most out of the technology, from software and hardware to training led by expert engineers,” Nensa said.

In April, NVIDIA experts hosted a workshop at IKIM, featuring lectures and hands-on training on GPU-accelerated deep learning, data science and AI in medicine. The workshop led IKIM to kickstart additional projects using AI for medicine — including a research contribution to MONAI.

In addition, IKIM is building SmartWard technology to provide an end-to-end AI-powered patient experience in hospitals, from service robots in waiting areas to automated discharge reports.

For the SmartWard project, the IKIM team is considering integrating the NVIDIA Clara Holoscan platform for medical device AI computing.

Subscribe to NVIDIA healthcare news and watch IKIM’s NVIDIA GTC session on demand.

Feature image courtesy of University of Duisburg-Essen.

The post From Code to Clinic, Smart Hospital Tech Boosts Efficiency, Sustainability in Medicine appeared first on NVIDIA Blog.

Powered Up: 5G and VR Accelerate Vehicle Battery Design

Traveling the scenic route between Wantage, a small town in Oxfordshire, and Coventry in the U.K. meanders up steep hills, past the birthplace of Shakespeare and skirts around 19th-century English bathhouses.

A project using edge computing and the world’s first 5G-enabled VR technology is enabling two engineering teams in those locales, about 70 miles apart, to collaborate as if they were in the same room.

The project is taking place at Hyperbat, the U.K.’s largest independent electric vehicle battery manufacturer. The company’s engineers are able to work simultaneously on a 1:1-scale digital twin of an EV battery.

They can immerse themselves in virtual tasks that mimic real life thanks to renders created using NVIDIA GPUs, RTX Virtual Workstation software and NVIDIA CloudXR technology. The digital transformation results in reduced inefficiencies and faster design processes.

Working in a New Reality

The team at Hyperbat, in partnership with BT, Ericsson, the GRID Factory, Masters of Pie, Qualcomm and NVIDIA, has developed a proof of concept that uses VR to power collaborative sessions.

Using a digital twin with VR delivers greater clarity during the design process. Engineers can work together from anywhere to effectively identify and rectify errors during the vehicle battery design process, making projects more cost-effective.

“This digital twin solution at Hyperbat is the future of manufacturing,” said Marc Overton, managing director of Division X, part of BT’s Enterprise business. “It shows how a 5G private network can provide the foundation for a whole host of new technologies which can have a truly transformative effect in terms of collaboration, innovation and speeding up the manufacturing process.”

See Hyperbat’s system in action:

Masters of Pie’s collaboration engine, called Radical, delivers a real-time extended reality (XR) experience that allows design and manufacturing teams to freely interact with a 3D, lifesize model of an electric vehicle battery. This gives the Hyperbat team a single source of truth for each project — no need for numerous iterations.

The 5G-enabled VR headset, powered by the Qualcomm Snapdragon XR2 platform, gives the team an untethered experience that can be launched with just one click. Designed specifically to address all the challenges of extended reality, it doesn’t require a lengthy setup, nor the importing and exporting of data. Designers can put on their headsets and get straight to work.

Speed Is Key

5G’s ultra-low latency, deployed using an Ericsson radio and private 5G network at Hyperbat, provides faster speeds and more reliable connections, as well as immediate response times.

Combining 5G with the cloud and XR removes inefficiencies in design processes and speeds up production lines, improvements that could greatly benefit the wider manufacturing sector.

And using Project Aurora — NVIDIA’s CloudXR and RTX Virtual Workstation software platform for XR streaming at the edge of the 5G network — large amounts of data can be rapidly processed on remote computers before being streamed to VR headsets with ultra-low latency.

Innovation on a New Scale

AI is reshaping almost every industry. VR and augmented reality provide windows for AI in industry and new design possibilities, with 5G making the technology more accessible.

“Hyperbat’s use case is another demonstration of how 5G and digitalization can really help boost the U.K.’s economy and industry,” said Katherine Ainley, CEO of Ericsson U.K. and Ireland. This technology “can really drive efficiency and help us innovate on a whole new scale,” she said.

Learn more about NVIDIA CloudXR.

The post Powered Up: 5G and VR Accelerate Vehicle Battery Design appeared first on NVIDIA Blog.

3D Environment Artist Jacinta Vu Sets the Scene ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

3D environment artist Jacinta Vu joins us In the NVIDIA Studio this week, showcasing her video-game-inspired scene Royal Library and 3D content creation workflow.

Based in Cincinnati, Vu specializes in transforming 2D concept art into 3D models and scenes, a critical contribution she made to The Dragon Prince from Wonderstorm Games.

Vu’s work features a rich variety of colors and textures, from individual models to fully fleshed-out scenes.

Her artistic endeavors often start with hand-drawn, low-poly game assets that look like beautiful paintings, which was her original stylistic intention for Royal Library.

“Around the time of Royal Library, my style was very hand-painted and I wanted to work more towards League of Legends and World of Warcraft styles,” Vu said. “My vision for this project, however, was very different. Royal Library is based on concept art and very different if you compare it.”

Fine attention to detail on individual models is the foundation of creating a stunning scene.

Vu began her creative workflow crafting 3D models in Autodesk Maya, slowly building out the larger scene. Deploying her GeForce RTX 2080 GPU unlocked the GPU-accelerated viewport, enabling Vu’s modeling and animation workflows to be faster and more interactive. This left her free to ideate and unlock creativity, all while saving valuable time.

“Being able to make those fast, precise tweaks was really nice,” Vu said. “Especially since, when you’re making a modular kit for an interior versus an exterior, there is less room to mess up because buildings are made to be perfect structurally.”

Practice makes perfect. The NVIDIA Studio YouTube channel hosts many helpful tutorials, including how to quickly model a scene render using a blocking technique in Autodesk Maya.

Vu then used ZBrush’s customizable brushes to shape and sculpt some models in finer detail.

Next, Vu deployed Marmoset Toolbag, where RTX acceleration baked her models in mere seconds, saving rendering time later in the process.

Vu then shifted gears to lighting, where her mentor encouraged her to go big, literally, asking, “Wouldn’t it be cool to do all this bounce lighting in this big, expansive building?”

Here, Vu experimented with lighting techniques that take advantage of several GPU-accelerated features. In Unreal Engine 4.26, RTX-accelerated ray tracing and NVIDIA DLSS, powered by AI and Tensor Cores, make scene refinement simpler and faster. With the release of Unreal Engine 5, Vu then tried Lumen, UE5’s fully dynamic global illumination system, which gives her the ability to light her scene in stunning detail.

Composition is a key part of the process, Vu noted: “When building a composition, you really want to look into the natural lines of architecture that lead your eye to a focal point.”

Normally Vu would apply her hand-painted texture style to the finished model, but as she continued to refine the scene, it made more and more sense to lean into realistic visuals, especially with RTX GPU hardware to support her creative ambition.

“It’s actually really weird, because I think I was stuck in the process for a while where I had lighting set up, the camera set up, the models were done except for textures,” said Vu. “For me that was hard, because I am from a hand-painted background and switching textures was nerve wracking.”

Applying realistic textures and precise lighting brings the Royal Library to life.

Vu created her textures in Adobe Photoshop and then used Substance 3D Painter to apply various colors and materials directly to her 3D models. NVIDIA RTX and NVIDIA Iray technology in the viewport enable Vu to edit in real time and use ray-traced baking for faster rendering speeds — all accelerated by her GPU.

Vu returned to Unreal Engine 5 to animate the scene using the Sequencer feature. The sparkly effect comes from a godray, amplified by particle effects, combined with atmospheric fog to fill the room.

All that’s left are final renders. Vu rendered her full-fidelity scene at lightning speed with UE5’s RTX-accelerated Path Tracer.

At last, the Royal Library is ready for visitors, friends and distinguished guests.

Vu, proud to have finally completed Royal Library, reflected on her creative journey, saying, “In the last stretch, I said, ‘I actually know how to do this.’ Once again I was in my head thinking I couldn’t do something, but it was freeing and it’s the type of thing where I learned so much for my next one. I know I can do a lot more a lot quicker, because I know how to do it and I can keep practicing, so I can get to the quality I want.”

NVIDIA Studio exists to unlock creative potential. It provides the resources, innovation and know-how to assist passionate content creators, like Vu.

3D environment artist Jacinta Vu is on ArtStation and Twitter.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.

The post 3D Environment Artist Jacinta Vu Sets the Scene ‘In the NVIDIA Studio’ appeared first on NVIDIA Blog.

The Data Center’s Traffic Cop: AI Clears Digital Gridlock

Gal Dalal wants to ease the commute for those who work from home — or the office.

The senior research scientist at NVIDIA, who is part of a 10-person lab in Israel, is using AI to reduce congestion on computer networks.

For laptop jockeys, a spinning circle of death — or worse, a frozen cursor — is as bad as a sea of red lights on the highway. Like rush hour, it’s caused by a flood of travelers angling to get somewhere fast, crowding and sometimes colliding on the way.

AI at the Intersection

Networks use congestion control to manage digital traffic. It’s basically a set of rules embedded into network adapters and switches, but as the number of users on networks grows, their conflicts can become too complex to anticipate.

AI promises to be a better traffic cop because it can see and respond to patterns as they develop. That’s why Dalal is among many researchers around the world looking for ways to make networks smarter with reinforcement learning, a type of AI that rewards models when they find good solutions.

But until now, no one’s come up with a practical approach for several reasons.

Racing the Clock

Networks need to be both fast and fair so no request gets left behind. That’s a tough balancing act when no one driver on the digital road can see the entire, ever-changing map of other drivers and their intended destinations.

And it’s a race against the clock. To be effective, networks need to respond to situations in about a microsecond, or one-millionth of a second.

To smooth traffic, the NVIDIA team created new reinforcement learning techniques inspired by state-of-the-art computer game AI and adapted them to the networking problem.

Part of their breakthrough, described in a 2021 paper, was coming up with an algorithm and a corresponding reward function for a balanced network based only on local information available to individual network streams. The algorithm enabled the team to create, train and run an AI model on their NVIDIA DGX system.
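For intuition, a toy reward of this flavor might trade a flow's own throughput against the queueing delay it observes, both purely local signals available to a single network stream. This stand-in is illustrative only, not the reward function from the NVIDIA paper:

```python
import math

def local_reward(rate_mbps: float, rtt_us: float, base_rtt_us: float) -> float:
    """Toy congestion-control reward from purely local signals.

    Rewards throughput (log scale, so gains diminish) and penalizes queueing
    delay, i.e. round-trip time above the flow's unloaded baseline.
    """
    queue_delay = max(rtt_us - base_rtt_us, 0.0)
    return math.log(max(rate_mbps, 1e-6)) - 0.01 * queue_delay

# A flow that blasts traffic and inflates the RTT scores worse than one that
# sends less but keeps queues nearly empty:
greedy = local_reward(rate_mbps=100.0, rtt_us=250.0, base_rtt_us=50.0)
fair = local_reward(rate_mbps=60.0, rtt_us=60.0, base_rtt_us=50.0)
```

Because every flow computes this from its own measurements, no agent needs the global picture of the network, which is the property that makes the approach practical at scale.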

A Wow Factor

Dalal recalls the meeting where a fellow Nvidian, Chen Tessler, showed the first chart plotting the model’s results on a simulated InfiniBand data center network.

“We were like, wow, ok, it works very nicely,” said Dalal, who wrote his Ph.D. thesis on reinforcement learning at Technion, Israel’s prestigious technical university.

“What was especially gratifying was we trained the model on just 32 network flows, and it nicely generalized what it learned to manage more than 8,000 flows with all sorts of intricate situations, so the machine was doing a much better job than preset rules,” he added.

Reinforcement learning (purple) outperformed all rule-based congestion control algorithms in NVIDIA’s tests.

In fact, the algorithm delivered at least 1.5x better throughput and 4x lower latency than the best rule-based technique.

Since the paper’s release, the work’s won praise as a real-world application that shows the potential of reinforcement learning.

Processing AI in the Network

The next big step, still a work in progress, is to design a version of the AI model that can run at microsecond speeds using the limited compute and memory resources in the network. Dalal described two paths forward.

His team is collaborating with the engineers designing NVIDIA BlueField DPUs to optimize the AI models for future hardware. BlueField DPUs aim to run an expanding set of communications jobs inside the network, offloading tasks from overburdened CPUs.

Separately, Dalal’s team is distilling the essence of its AI model into a machine learning technique called boosting trees, a series of yes/no decisions that’s nearly as smart but much simpler to run. The team aims to present its work later this year in a form that could be immediately adopted to ease network traffic.
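Boosting trees of this kind can be sketched in a few lines: fit a sequence of one-split regression trees ("stumps") to a teacher model's outputs, so that inference reduces to a handful of yes/no comparisons. The toy below distills a hypothetical "slow down when the queue is deep" policy; it is illustrative only, not NVIDIA's implementation:

```python
def fit_stump(xs, ys):
    """Find the threshold minimizing squared error of a two-leaf predictor."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - lm) ** 2 for y in left) + sum((y - rm) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def distill(xs, teacher, n_rounds=5, lr=0.5):
    """Gradient-boost stumps toward the teacher's predictions."""
    stumps, residuals = [], [teacher(x) for x in xs]
    for _ in range(n_rounds):
        s = fit_stump(xs, residuals)
        stumps.append(s)
        residuals = [r - lr * s(x) for x, r in zip(xs, residuals)]
    return lambda x: lr * sum(s(x) for s in stumps)

teacher = lambda q: 1.0 if q > 0.7 else 0.2   # hypothetical teacher policy
xs = [i / 20 for i in range(21)]              # queue-occupancy samples
student = distill(xs, teacher, n_rounds=8)
```

After eight rounds the stump ensemble closely matches the teacher's step at 70% queue occupancy, while each prediction costs only eight comparisons, the kind of budget a network device can afford.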

A Timely Traffic Solution

To date, Dalal has applied reinforcement learning to everything from autonomous vehicles to data center cooling and chip design. When NVIDIA acquired Mellanox in April 2020, the NVIDIA Israel researcher started collaborating with his new colleagues in the nearby networking group.

“It made sense to apply our AI algorithms to the work of their congestion control teams, and now, two years later, the research is more mature,” he said.

It’s good timing. Recent reports of double-digit increases in Israel’s car traffic since pre-pandemic times could encourage more people to work from home, driving up network congestion.

Luckily, an AI traffic cop is on the way.

The post The Data Center’s Traffic Cop: AI Clears Digital Gridlock appeared first on NVIDIA Blog.

A Breakthrough Preview: JIDU Auto Debuts Intelligent Robo-01 Concept Vehicle, Powered by NVIDIA DRIVE Orin

JIDU Auto sees a brilliant future ahead for intelligent electric vehicles.

The EV startup, backed by tech titan Baidu, took the wraps off the Robo-01 concept vehicle last week during its virtual ROBODAY event. The robot-inspired, software-defined vehicle features cutting-edge AI capabilities powered by the high-performance NVIDIA DRIVE Orin compute platform.

The sleek compact SUV provides a glimpse of JIDU’s upcoming lineup. It’s capable of level 4 autonomous driving: safely operating at highway speeds and on busy urban roads, as well as performing driverless valet parking.

The Robo-01 also showcases a myriad of design innovations, including a retractable yoke steering wheel that folds under the dashboard during autonomous driving mode, as well as lidar sensors that extend and retract from the hood. It also features human-like interaction between passengers and the vehicle’s in-cabin AI, using perception and voice recognition.

JIDU is slated to launch a limited production version of the robocar later this year.

Continuous Innovation

A defining feature of the Robo-01 concept is its ability to improve by adding new intelligent capabilities throughout the life of the vehicle.

These updates are delivered over the air, which requires a software-defined vehicle architecture built on high-performance AI compute. The Robo-01 has two NVIDIA DRIVE Orin systems-on-a-chip (SoCs) at the core of its centralized computer system, which provide ample compute for autonomous driving and AI features, with headroom to add new capabilities.

DRIVE Orin is a highly advanced autonomous vehicle processor. This supercomputer on a chip is capable of delivering up to 254 trillion operations per second (TOPS) to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles and robots, while meeting systematic safety standards such as ISO 26262 ASIL-D.

The two DRIVE Orin SoCs at the center of JIDU vehicles will deliver more than 500 TOPS of performance to achieve the redundancy and diversity necessary for autonomous operation and in-cabin AI features.

Even More in Store

JIDU will begin taking orders in 2023 for the production version of the Robo-01, with deliveries scheduled for 2024.

The automaker plans to unveil the design of its second production model at this year’s Guangzhou Auto Show in November.

Jam-packed with intelligent features and room to add even more, the Robo-01 shows the incredible possibilities that future electric vehicles can achieve with a centralized, software-defined AI architecture.

The post A Breakthrough Preview: JIDU Auto Debuts Intelligent Robo-01 Concept Vehicle, Powered by NVIDIA DRIVE Orin appeared first on NVIDIA Blog.

All-In-One Financial Services? Vietnam’s MoMo Has a Super-App for That

For younger generations, paper bills, loan forms and even cash might as well be in a museum. Smartphones in hand, their financial services largely take place online.

The financial-technology companies that serve them are in a race to develop AI that can make sense of the vast amount of data the companies collect — both to provide better customer service and to improve their own backend operations.

Vietnam-based fintech company MoMo has developed a super-app that includes payment and financial transaction processing in one self-contained online commerce platform. The convenience of this all-in-one mobile platform has already attracted over 30 million users in Vietnam.

To improve the efficiency of the platform’s chatbots, electronic know-your-customer (eKYC) systems and recommendation engines, MoMo uses NVIDIA GPUs running in Google Cloud. It uses NVIDIA DGX systems for training and batch processing.

In just a few months, MoMo has achieved impressive results in speeding development of solutions that are more robust and easy to scale. Using NVIDIA GPUs for eKYC inference tasks has resulted in a 10x speedup compared to using CPUs, the company says. For the MoMo Face Payment service, using TensorRT has reduced training and inference time by 10x.

AI Offers a Different Perspective

Tuan Trinh, director of data science at MoMo, describes his company’s use of AI as a way to get a different perspective on its business. One such project processes vast amounts of data and turns it into computerized visuals or graphs that can then be analyzed to improve connectivity between users in the app.

MoMo developed its own AI algorithm that uses over a billion data points to direct recommendations of additional services and products to its customers. These offerings help maintain a line of communication with the company’s user base that helps boost engagement and conversion.

The company also deploys a recommendation box on the home screen of its super-app. This has dramatically improved its click-through rate, as the AI prompts customers with useful recommendations and keeps them engaged.

With AI, MoMo says it can process the habits of 10 million active users over the course of the last 30-60 days to train its predictive models. In addition, NVIDIA Triton Inference Server helps unify the serving flows for recommendation engines, which significantly reduces the effort to deploy AI applications in production environments. TensorRT has also contributed to a 3x performance improvement in MoMo’s payment services AI model inference, boosting the customer experience.

Chatbots Advance the Conversation

MoMo will use AI-powered chatbots to scale up faster when accommodating and engaging with users. Chatbot services are especially effective on mobile device apps, which tend to be popular with younger users, who often prefer them over making phone calls to customer service.

Chatbot users can inquire about a product and get the support they need to evaluate it before purchasing — all from one interface — which is essential for a super-app like MoMo’s that functions as a one-stop-shop.

The chatbots are also an effective vehicle for upselling or suggesting additional services, MoMo says. When combined with machine learning, it’s possible to categorize target audiences for different products or services to customize their experience with the app.

AI chatbots have the additional benefit of freeing up MoMo’s customer service team to handle other important tasks.

Better Credit Scoring

Credit history data from all of MoMo’s 30 million-plus users can be fed into AI models used for risk control of financial services. MoMo has applied credit scoring to the lending services within its super-app. Because the company doesn’t rely solely on deep learning for less complex tasks, its development team has been able to achieve higher accuracy with shorter processing times.

The MoMo app takes less than two seconds to make a lending decision, yet more accurate AI predictions still help it avoid risky lending targets. This helps keep customers from taking on too much debt, and helps MoMo avoid missing out on potential revenue.

Since AI is capable of processing both structured and unstructured data, it’s able to incorporate information beyond traditional credit scores, like whether customers spend their money on necessities or luxuries, to assess a borrower’s risk more accurately.
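As a toy illustration of mixing a structured signal with a behavioral one derived from transaction categories, consider the sketch below. The features, weights and category list are invented for the example, not MoMo's model:

```python
import math

# Hypothetical discretionary-spending categories (invented for this sketch).
DISCRETIONARY = {"gaming", "luxury", "travel"}

def discretionary_share(transactions):
    """Fraction of total spend going to discretionary categories."""
    total = sum(amount for _, amount in transactions)
    disc = sum(amount for cat, amount in transactions if cat in DISCRETIONARY)
    return disc / total if total else 0.0

def default_risk(missed_payments, transactions):
    """Logistic combination of a structured and a behavioral feature.

    Toy weights: a traditional signal (missed payments) plus a signal
    unavailable to classic credit scores (spending mix).
    """
    z = -2.0 + 0.9 * missed_payments + 1.5 * discretionary_share(transactions)
    return 1.0 / (1.0 + math.exp(-z))

low = default_risk(0, [("groceries", 200), ("rent", 800)])
high = default_risk(3, [("luxury", 500), ("gaming", 300), ("groceries", 200)])
```

The point of the sketch is structural: a borrower with no traditional credit history still produces the behavioral feature, so the model can score applicants a conventional bureau score would miss.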

Future of AI in Fintech

With fintechs increasingly applying AI to their massive data stores, MoMo’s team predicts the industry will need to evaluate how to do so in a way that keeps user data safe — or risk losing customer loyalty. MoMo already plans to expand its use of graph neural networks and related models, based on their proven ability to dramatically improve its operations.

The MoMo team also believes that AI could one day make credit scores obsolete. Since AI is able to make decisions based on broader unstructured data, it’s possible to determine loan approval by considering other risks besides a credit score. This would help open up the pool of potential users on fintech apps like MoMo’s to people in underserved and underbanked communities, who may not have credit scores, let alone “good” ones.

Around one in four American adults is “underbanked,” making it harder to get a loan or credit card, and more than half of Africa’s population is completely “credit invisible,” meaning they have no bank account or credit score. MoMo believes AI could bring banking access to communities like these and open up a new user base for fintech apps at the same time.

Explore NVIDIA’s AI solutions and enterprise-level AI platforms driving innovation in financial services. 

The post All-In-One Financial Services? Vietnam’s MoMo Has a Super-App for That appeared first on NVIDIA Blog.

Get Your Wish: Genshin Impact Coming to GeForce NOW

Greetings, Traveler.

Prepare for adventure. Genshin Impact, the popular open-world action role-playing game, is leaving limited beta and launching for all GeForce NOW members next week.

Gamers can get their game on today with six new games joining the GeForce NOW library.

As announced last week, Warhammer 40,000: Darktide is coming to the cloud at launch — with GeForce technology. This September, members will be able to leap thousands of years into the future to the time of the Space Marines, streaming on GeForce NOW with NVIDIA DLSS and more.

Plus, the 2.0.41 GeForce NOW app update brings a highly requested feature: in-stream copy-and-paste support from the clipboard while streaming from the PC and Mac apps — so there’s no need to manually type a long, complex password for the digital store. Get to your games even faster with this new capability.

GeForce NOW is also giving mobile gamers more options by bringing the perks of RTX 3080 memberships and PC gaming at 120 frames per second to all devices with support for 120Hz phones. The capability is rolling out in the coming weeks.

Take a Trip to Teyvat

After the success of a limited beta and receiving great feedback from members, Genshin Impact is coming next week to everyone streaming on GeForce NOW.

Embark on a journey as a traveler from another world, stranded in the fantastic land of Teyvat. Search for your missing sibling in a vast continent made up of seven nations. Master the art of elemental combat and build a dream team of over 40 uniquely skilled playable characters – like the newest additions of Yelan and Kuki Shinobu – each with their own rich stories, personalities and combat styles.

Experience the immersive campaign, dive deep into rich quests alongside iconic characters and complete daily challenges. Charge head-on into battles solo or invite friends to join the adventures. The world is constantly expanding, so bring it wherever you go across devices, streaming soon to underpowered PCs, Macs and Chromebooks on GeForce NOW.

RTX 3080 members can level up their gaming for the best experience by streaming in 4K resolution and 60 frames per second on the PC and Mac apps.

Let the Gaming Commence

All of the action this GFN Thursday kicks off with six new games arriving on the cloud. Members can also gear up for Rainbow Six Siege Year 7 Season 2.

Get ready for a new Operator, Team Deathmatch map and more in “Rainbow Six Siege” Year 7 Season 2.

Members can look for the following streaming this week:

Chivalry 2 (New release on Steam)
Starship Troopers – Terran Command (New release on Steam and Epic Games Store)
Builder Simulator (Steam)
Supraland (Free on Epic Games Store)
The Legend of Heroes: Trails of Cold Steel II (Steam)
POSTAL: Brain Damaged (Steam)

Finally, members still have a chance to stream the PC Building Simulator 2 open beta before it ends on Monday, June 20. Experience deeper simulation, an upgraded career mode and powerful new customization features to bring your ultimate PC to life.

To start your weekend gaming adventures, we’ve got a question. Let us know your thoughts on Twitter or in the comments below.

What are there more of in video games?

NPCs or Quests?

— NVIDIA GeForce NOW (@NVIDIAGFN) June 15, 2022

The post Get Your Wish: Genshin Impact Coming to GeForce NOW appeared first on NVIDIA Blog.

AI for Personalized Health: Startup Advances Precision Medicine for COVID-19, Chronic Diseases

At a time when much about COVID-19 remained a mystery, U.K.-based PrecisionLife used AI and combinatorial analytics to discover new genes associated with severe symptoms and hospitalizations for patients.

The techbio company’s study, published in June 2020, pinpoints 68 novel genes associated with individuals who experienced severe disease from the virus. Over 70 percent of these targets have since been independently validated in global scientific literature as genetic risk factors for severe COVID-19 symptoms.

The startup was able to perform this early and accurate analysis using the first small COVID-19 patient dataset reported in the UK Biobank, with the help of AI, trained on NVIDIA A40 GPUs and backed by CUDA software libraries. PrecisionLife’s combinatorial analytics approach identifies interactions between genetic variants and other clinical or epidemiological factors in patients.

Results are shown in the featured image above, which depicts the disease architecture stratification of a severe COVID-19 patient population at the pandemic’s outset. Colors represent patient subgroups. Circles represent disease-associated genetic variants. And lines represent co-associated variants.

PrecisionLife technology helps researchers better understand complex disease biology at a population and personal level. Beyond COVID-19, the PrecisionLife analytics platform has been used to identify targets for precision medicine for more than 30 chronic diseases, including type 2 diabetes and ALS.

The company is a member of NVIDIA Inception, a free program that supports startups revolutionizing industries with cutting-edge technology.

Unique Disease Findings

Precision medicine considers an individual’s genetics, environment and lifestyle when selecting the treatment that could work best for them. PrecisionLife focuses on identifying how combinations of such factors impact chronic diseases.

The PrecisionLife platform enables a deeper understanding of the biology that leads to chronic disease across subgroups of patients. It uses combinatorial analytics to draw insights from the genomics and clinical history of patients — pulled from datasets provided by national biobanks, research consortia, patient charities and more.

Due to the inherent heterogeneity of chronic diseases, patients with the same diagnosis don’t necessarily experience the same causes, trajectories or treatments of disease.

The PrecisionLife platform identifies subgroups — within large patient populations — that have matching disease drivers, disease progression and treatment response. This can help researchers select the right targets for drug development, the right treatments for individuals and the right patients for clinical trials.

“Chronic disease is a complex space — a multi-genetic, multi-environmental problem with multiple patient subgroups,” said Mark Strivens, chief technology officer at PrecisionLife. “We work on technology to tackle problems that previous techniques couldn’t solve, and our unique disease findings will lead to a different set of therapeutic opportunities to best treat individuals.”

PrecisionLife technology is different from traditional analytical methods, like genome-wide association studies, which work best when single genetic variants are responsible for most of the disease risk. Instead, PrecisionLife offers combinatorial analytics, discovering significant combinations of multiple genetic and environmental factors.

The PrecisionLife platform can analyze data from 100,000 patients in just hours using NVIDIA A40 GPUs, a previously impossible feat, according to Strivens.

Plus, being a member of NVIDIA Inception gives the PrecisionLife team access to technical resources, hardware discounts and go-to-market support.

“Inception gives us access to technical expertise and connects us with other data-driven organizations that are a part of NVIDIA’s biotechnology AI ecosystem,” Strivens said. “Training from the NVIDIA Deep Learning Institute reduces the time it takes for our team members to ramp up learning a specific branch of programming.”

As a part of the groundbreaking U.K. life sciences community, PrecisionLife has access to a hub of healthcare innovation and specialist talent, Strivens said. Looking forward, the company plans to deliver new disease insights based on combinatorial analytics all across the globe.

Learn more about PrecisionLife and apply to join NVIDIA Inception.

Subscribe to NVIDIA healthcare news.

The post AI for Personalized Health: Startup Advances Precision Medicine for COVID-19, Chronic Diseases appeared first on NVIDIA Blog.

Smart Utility Vehicle: NIO ES7 Redefines Category with Intelligent, Versatile EV Powered by NVIDIA DRIVE Orin

Accounting for nearly half of global vehicle sales in 2021, SUVs have grown in popularity given their versatility. Now, NIO aims to amp up the volume further.

This week, the electric automaker unveiled the ES7 SUV, purpose-built for the intelligent vehicle era. Its sporty yet elegant body houses an array of cutting-edge technology, including the Adam autonomous driving supercomputer, powered by NVIDIA DRIVE Orin.

SUVs gained a foothold among consumers in the late 1990s as useful haulers for people and cargo. As powertrain and design technology developed, the category has flourished, with some automakers converting their fleets to mostly SUVs and trucks.

With the ES7, NIO is adding even more to the SUV category, packing it with plenty of features to please any driver.

The intelligent EV sports 10 driving modes, in addition to autonomous capabilities that will gradually cover expressways, urban areas, parking and battery swapping. It also includes a camping mode that maintains a comfortable cabin temperature with lower power consumption and immersive audio and lighting.

Utility Meets Technology

The technology inside the ES7 is the core of what makes it a category-transforming vehicle.

The SUV is the first to incorporate NIO’s watchtower sensor design, combining 33 high-performance lidar, radar, camera and ultrasonic sensors arranged around the vehicle. Data from these sensors is fused and processed by the centralized Adam supercomputer for robust surround perception.

With more than 1,000 trillion operations per second (TOPS) of performance provided by four DRIVE Orin systems-on-a-chip (SoCs), Adam can power a wide range of intelligent features in addition to perception, with enough headroom to add new capabilities over the air.

Using multiple SoCs, Adam integrates the redundancy and diversity necessary for safe autonomous operation. The first two SoCs process the 8 gigabytes of data produced every second by the vehicle’s sensor set.

The third Orin serves as a backup to ensure the system can operate safely in any situation. And the fourth enables local training, improving the vehicle with fleet learning and personalizing the driving experience based on individual user preferences.
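The redundancy pattern described above — primary compute units handling the workload, with a backup ready to take over — can be sketched generically. The classes and names below are hypothetical and purely illustrative; this is a textbook failover pattern, not NIO's or NVIDIA's actual software.

```python
# Generic primary/backup failover sketch (hypothetical, for illustration).
class ComputeUnit:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

    def process(self, frame):
        if not self.healthy:
            raise RuntimeError(f"{self.name} fault")
        return f"{self.name}:perception({frame})"

def run_with_failover(primaries, backup, frame):
    """Try each primary unit in turn; fall back to the backup on failure,
    so a single faulty unit never interrupts perception."""
    for unit in primaries:
        try:
            return unit.process(frame)
        except RuntimeError:
            continue  # this unit faulted; try the next one
    return backup.process(frame)

primaries = [ComputeUnit("orin_0"), ComputeUnit("orin_1")]
backup = ComputeUnit("orin_2")

print(run_with_failover(primaries, backup, "cam_42"))  # orin_0:perception(cam_42)
primaries[0].healthy = primaries[1].healthy = False
print(run_with_failover(primaries, backup, "cam_42"))  # orin_2:perception(cam_42)
```

In a safety-critical system the real mechanism is far more involved (hardware watchdogs, lockstep checking, diverse software stacks), but the principle is the same: no single point of failure in the perception path.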

With high-performance compute at its center, the ES7 delivers everything an SUV customer could need, and more.

A Growing Lineup

The ES7 joins the ET7 and ET5 as the third NIO vehicle built on the DRIVE Orin-powered Adam supercomputer, adding even greater selection for customers seeking a more intelligent driving experience.

NIO intends to have vehicle offerings in more than two dozen countries and regions by 2025 to bring one of the most advanced AI platforms to more customers.

Preorders for the ES7 SUV are now on the NIO app, with deliveries slated to begin in August.

The post Smart Utility Vehicle: NIO ES7 Redefines Category with Intelligent, Versatile EV Powered by NVIDIA DRIVE Orin appeared first on NVIDIA Blog.

The King’s Swedish: AI Rewrites the Book in Scandinavia

If the King of Sweden wants help drafting his annual Christmas speech this year, he could ask the same AI model that’s available to his 10 million subjects.

As a test, researchers prompted the model, called GPT-SW3, to draft one of the royal messages, and it did a pretty good job, according to Magnus Sahlgren, who heads research in natural language understanding at AI Sweden, a consortium kickstarting the country’s journey into the machine learning era.

“Later, our minister of digitalization visited us and asked the model to generate arguments for political positions and it came up with some really clever ones — and he intuitively understood how to prompt the model to generate good text,” Sahlgren said.

Early successes inspired work on an even larger and more powerful version of the language model they hope will serve any citizen, company or government agency in Scandinavia.

A Multilingual Model

The current version packs 3.6 billion parameters and is smart enough to do a few cool things in Swedish. Sahlgren’s team aims to train a state-of-the-art model with a whopping 175 billion parameters that can handle all sorts of language tasks in the Nordic languages of Swedish, Danish, Norwegian and, it hopes, Icelandic, too.

For example, a startup can use it to automatically generate product descriptions for an e-commerce website given only the products’ names. Government agencies can use it to quickly classify and route questions from citizens.

Companies can ask it to rapidly summarize reports so they can react fast. Hospitals can run distilled versions of the model privately on their own systems to improve patient care.
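The product-description use case boils down to prompting a causal language model. Since GPT-SW3 was not yet publicly released when this was written, the sketch below uses the small English `gpt2` checkpoint via the Hugging Face pipeline API as a stand-in; swap in the actual model name once it is available through the early access program.

```python
# Sketch of prompting a causal LM to draft a product description.
# "gpt2" is a placeholder checkpoint, not GPT-SW3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def describe_product(name: str) -> str:
    # Frame the task as text continuation: the model completes the
    # "Description:" field given only the product name.
    prompt = f"Product: {name}\nDescription:"
    out = generator(prompt, max_new_tokens=40, do_sample=False)
    # The pipeline returns the prompt plus the continuation; strip the prompt.
    return out[0]["generated_text"][len(prompt):].strip()

print(describe_product("Nordic wool hiking socks"))
```

In practice the quality of such completions depends heavily on the model's size and training data — the same prompt against a 175-billion-parameter Nordic-language model would behave very differently from this toy stand-in.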

“It’s a foundational model we will provide as a service for whatever tasks people want to solve,” said Sahlgren, who’s been working at the intersection of language and machine learning since he earned his Ph.D. in computational linguistics in 2006.

Permission to Speak Freely

It’s a capability increasingly seen as a strategic asset, a keystone of digital sovereignty in a world that speaks thousands of languages across nearly 200 countries.

Most language services today focus on Chinese or English, the world’s two most-spoken tongues. They’re typically created in China or the U.S., and they aren’t free.

“It’s important for us to have models built in Sweden for Sweden,” Sahlgren said.

Small Team, Super System

“We’re a small country and a core team of about six people, yet we can build a state-of-the-art resource like this for people to use,” he added.

That’s because Sweden has a powerful engine in Berzelius, a 300-petaflops AI supercomputer at Linköping University. It trained the initial GPT-SW3 model using just 16 of the 60 nodes in the NVIDIA DGX SuperPOD.

The next model may exercise all the system’s nodes. Such super-sized jobs require super software like the NVIDIA NeMo Megatron framework.

“It lets us scale our training up to the full supercomputer, and we’ve been lucky enough to have access to experts in the NeMo development team — without NVIDIA it would have been so much more complicated to come this far,” he said.

A Workflow for Any Language

NVIDIA’s engineers created a recipe based on NeMo and an emerging process called p-tuning that optimizes massive models fast, and it’s geared to work with any language.

In one early test, a model nearly doubled its accuracy after NVIDIA engineers applied the techniques.

What’s more, it requires one-tenth the data, slashing the need for tens of thousands of hand-labeled records. That opens the door for users to fine-tune a model with the relatively small, industry-specific datasets they have at hand.
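The core idea behind p-tuning — and prompt tuning generally — is that instead of updating a massive model's weights, a small set of learnable "virtual token" embeddings is prepended to the input while the backbone stays frozen, which is why so little labeled data is needed. The PyTorch toy below illustrates that mechanic with a tiny stand-in encoder; it is not the NeMo Megatron implementation, and all class and layer names are invented for the sketch.

```python
# Toy prompt-tuning sketch: only the virtual prompt embeddings (and a
# small task head) are trainable; the embeddings and encoder are frozen.
import torch
import torch.nn as nn

class PTunedClassifier(nn.Module):
    def __init__(self, base_embed: nn.Embedding, hidden: int,
                 num_virtual_tokens: int = 8, num_classes: int = 2):
        super().__init__()
        self.base_embed = base_embed                # "pretrained" embeddings
        self.base_embed.weight.requires_grad_(False)
        self.prompt = nn.Parameter(                 # the learnable prompt
            torch.randn(num_virtual_tokens, base_embed.embedding_dim) * 0.02)
        self.encoder = nn.GRU(base_embed.embedding_dim, hidden,
                              batch_first=True)
        for p in self.encoder.parameters():         # freeze the backbone
            p.requires_grad_(False)
        self.head = nn.Linear(hidden, num_classes)  # small trainable head

    def forward(self, token_ids):
        tok = self.base_embed(token_ids)                        # (B, T, D)
        prompt = self.prompt.unsqueeze(0).expand(
            token_ids.size(0), -1, -1)                          # (B, P, D)
        x = torch.cat([prompt, tok], dim=1)   # prepend virtual tokens
        _, h = self.encoder(x)
        return self.head(h[-1])

model = PTunedClassifier(nn.Embedding(1000, 32), hidden=64)
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # ['prompt', 'head.weight', 'head.bias']
```

Because gradients flow only into the prompt and head, the optimizer state and the labeled dataset can both be tiny relative to full fine-tuning — consistent with the one-tenth-the-data figure above.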

“We hope to inspire a lot of entrepreneurship in industry, startups and the public using our technology to develop their own apps and services,” said Sahlgren.

Writing the Next Chapter

Meanwhile, NVIDIA’s developers are already working on ways to make the enabling software better.

One test shows great promise for training new capabilities using widely available English datasets into models designed for any language. In another effort, they’re using the p-tuning techniques in inference jobs so models can learn on the fly.

Zenodia Charpy, a senior solutions architect at NVIDIA based in Gothenburg, shares the enthusiasm of the AI Sweden team she supports. “We’ve only just begun trying new and better methods to tackle these large language challenges — there’s much more to come,” she said.

The GPT-SW3 model will be made available by the end of the year via an early access program. To apply, contact francisca.hoyer@ai.se.

The post The King’s Swedish: AI Rewrites the Book in Scandinavia appeared first on NVIDIA Blog.
