Quick Guide to Data Collection in 2022

Digital solutions such as AI and ML are being implemented in every industry around the world. To successfully implement and leverage these technologies, organizations need to collect and harvest large…


Top 6 Data Collection Best Practices in 2022

The line between high-performing and erroneous AI/ML models is created by the quality of their training data. While some organizations have a minefield of data they have been generating throughout…


Unethical Use of AI Being Mainstreamed by Some Business Execs, Survey Finds 

By John P. Desmond, AI Trends Editor 

In a recent survey, senior business executives admitted to their sometimes unethical use of AI. 

The admissions came from respondents to a recent KPMG survey on data privacy of 250 director-level or higher executives at companies with more than 1,000 employees.

Some 29% of the respondents admitted that their own companies collect personal information that is “sometimes unethical” and 33% said consumers should be concerned about how their company uses personal data, according to a recent report in The New Yorker. 

Orson Lucas, principal, US privacy services team, KPMG

The results surprised the surveyors. “For some companies, there may be a misalignment between what they say they are doing on data privacy and what they are actually doing,” stated Orson Lucas, the principal in KPMG’s US privacy services team.

One growing practice is a move to “collect everything” about a person, then figure out later how to use it. This approach is seen as an opportunity to better understand what customers want from the business, which can later lead to a transparent negotiation about what information customers are willing to provide and for how long.

Most of these companies have not yet reached the transparent negotiation stage. Some 70% of the executives interviewed said their companies had increased the amount of personal information they collected in the past year. And 62% said their company should be doing more to strengthen data protection measures.   

KPMG also surveyed 2,000 adults in the general population on data privacy, finding that 40% did not trust companies to behave ethically with their personal information. In Lucas’ view, consumers will want to punish a business that demonstrates unfair practices around the use of personal data.   

AI Conferences Considering Wider Ethical Reviews of Submitted Papers  

Meanwhile, at AI conferences, AI technology is sometimes on display with little sensitivity to its potentially unethical use, and at times, this AI tech finds its way into commercial products. The IEEE Conference on Computer Vision and Pattern Recognition in 2019, for example, accepted a paper from researchers with MIT’s Computer Science and AI Laboratory on reconstructing a person’s face from audio recordings of that person speaking.

The goal of the project, called Speech2Face, was to research how much information about a person’s looks could be inferred from the way they speak. The researchers proposed a neural network architecture designed specifically to perform the task of facial reconstruction from audio.   

The paper quickly drew backlash. Alex Hanna, a trans woman and sociologist at Google who studies AI ethics, asked via tweet for the research to stop, calling it “transphobic.” Hanna objected to the way the research sought to tie identity to biology. Debate ensued. Some questioned whether papers submitted to academic-oriented conferences need further ethical review.

Michael Kearns, a computer scientist at the University of Pennsylvania and a coauthor of the book “The Ethical Algorithm,” stated to The New Yorker that we are in “a little bit of a Manhattan Project moment” for AI and machine learning. “The academic research in the field has been deployed at a massive scale on society,” he stated. “With that comes this higher responsibility.”

Katherine Heller, computer scientist, Duke University

A paper on Speech2Face was accepted at the 2019 Neural Information Processing Systems (NeurIPS) conference, held in Vancouver, Canada. Katherine Heller, a computer scientist at Duke University and a NeurIPS co-chair for diversity and inclusion, told The New Yorker that the conference had accepted some 1,400 papers that year, and she could not recall facing comparable pushback on the subject of ethics. “It’s new territory,” she stated.

For NeurIPS 2020, held remotely in December of that year, papers faced rejection if the research was found to pose a threat to society. Iason Gabriel, a research scientist at Google DeepMind in London, who is among the leadership of the conference’s ethics review process, said the change was needed to help AI “make progress as a field.”

Ethics is somewhat new territory for computer science. Whereas biologists, psychologists, and anthropologists are used to reviews that query the ethics of their research, computer scientists have not been raised that way. Their reviews have traditionally focused more on matters of method and integrity, such as plagiarism and conflicts of interest.

That said, a number of groups interested in the ethical use of AI have come about in the last several years. The Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction, for example, launched a working group in 2016 that is now an ethics research committee that offers to review papers at the request of conference program chairs. In 2019, the group received 10 inquiries, primarily around research methods.   

“Increasingly, we do see, especially in the AI space, more and more questions of, Should this kind of research even be a thing?” stated Katie Shilton, an information scientist at the University of Maryland and the chair of the committee, to The New Yorker. 

Shilton identified four categories of potentially unethical impact. First, AI that can be “weaponized” against populations, such as facial recognition, location tracking, and surveillance. Second, technologies such as Speech2Face that may “harden people into categories that don’t fit well,” such as gender or sexual orientation. Third, automated weapons research. Fourth, tools used to create alternate sets of reality, such as fake news, voices or images.  

This green field territory is a venture into the unknown. Computer scientists usually have good technical knowledge, “But lots and lots of folks in computer science have not been trained in research ethics,” Shilton stated, noting that it is not easy to say that a line of research should not exist. 

Location Data Weaponized for Catholic Priest 

The weaponization of location-tracking technology was amply demonstrated in the recent experience of the Catholic priest who was outed as a Grindr dating app user, and who subsequently resigned. Catholic priests take a vow of celibacy, which would be in conflict with being in a dating app community of any kind.   

The incident raised a panoply of ethical issues. The story was broken by a Catholic news outlet called the Pillar, which had somehow obtained “app data signals from the location-based hookup app Grindr,” stated an account in recode from Vox. It was not clear how the publication obtained the location data other than to say it was from a “data vendor.”  

“The harms caused by location tracking are real and can have a lasting impact far into the future,” stated Sean O’Brien, principal researcher at ExpressVPN’s Digital Security Lab, to recode. “There is no meaningful oversight of smartphone surveillance, and the privacy abuse we saw in this case is enabled by a profitable and booming industry.”  

One data vendor in this business is X-Mode, which collects data from millions of users across hundreds of apps. The company was kicked off the Apple and Google platforms last year over its national security work with the US government, according to an account in The Wall Street Journal. However, the company is being acquired by Digital Envoy, Inc. of Atlanta, and will be rebranded as Outlogic. Its chief executive, Joshua Anton, will join Digital Envoy as chief strategy officer. The purchase price was not disclosed.

Acquiring X-Mode “allows us to further enhance our offering related to cybersecurity, AI, fraud and rights management,” stated Digital Envoy CEO Jerrod Stoller. “It allows us to innovate in the space by looking at new solutions leveraging both data sets. And it also brings new clients and new markets.”   

Digital Envoy specializes in collecting and providing to its customers data on internet users based on the IP address assigned to them by their ISP or cell phone carrier. The data can include approximate geolocation and is said to be useful in commercial applications, including advertising.   

X-Mode recently retired a visualization app, called XDK, and has changed practices by adding new guidance on where data is sourced from, according to an account in Technically. This is the second time the company has rebranded since it was founded in 2013, when it started off as Drunk Mode.  

Following the acquisition, Digital Envoy said in a statement that it added a new code of ethics, a data ethics review panel, a sensitive app policy and will be hiring a chief privacy officer. 

Read the source articles and information in The New Yorker, in recode from Vox, in The Wall Street Journal and in Technically. 

Experiences in ML Scaling, ML Project Delivery in Healthcare 

By John P. Desmond, AI Trends Editor  

Experiences with AI and machine learning at CVS Health and St. Luke’s Health System in Boise, Idaho, are delivering practical benefits to the two organizations.

Nels Lindahl, director of Clinical Decision Systems, CVS Health

CVS Health is learning how to scale AI applications using machine learning, especially through the use of machine learning operations (MLOps) tools, according to Nels Lindahl, director of Clinical Decision Systems, speaking in a virtual session at the recent Ai4 Conference.

And St. Luke’s Health System put a COVID-19 prediction program, a supply chain purchase engine and a demand-based staffing application into initial production using AI and machine learning, said Dr. Justin Smith, senior director of advanced analytics at St. Luke’s, in another virtual session at the conference.

“We are at an MLOps tipping point, where ML has a growing production footprint, with adoption picking up pace and awareness and understanding at an all-time high,” stated Lindahl. “ML tech can now deliver; people are seeing real use cases in the wild and having them grow; it’s real.”   

The three primary “ecosystems” for building out an AI footprint are Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP), he said. Developers are using open source tools and some they develop on their own to deliver value to their organizations.   

He highly recommended having an ML strategy, with an executive sponsor, defined budget resources, and a repeatable process to ensure the same approach can be replicated throughout an enterprise. “You need a narrative that helps you push things forward. When you can deliver on a use case consistently, you want to be ready to go,” he said.

In his professional role today, Lindahl is directing IT application development. He has delivered several complex process automation projects within the clinical decision space of pharmacy benefit management.   

Tracking Popular API Services on GitHub 

In April, he was tracking 19 external API services on axes of scale versus maturity, with, for example, AWS Enterprise Search and GCP Vision API rated high on both axes, placing them in the upper right quadrant. He recently updated his chart and is now tracking 49 external services. “The number of APIs out there is growing every day,” he stated.

The external services are becoming more specific, so that instead of one large vision API, the offerings are breaking down into more specializations. “The exact thing you want to do is probably closer to something you can put into production from your API,” he said. “The AI is out there and ready to go.”

He emphasized again the need for a development organization to have an overall ML strategy. “Just because you can go out and get an API, does not mean that is the right thing to do for your organization. It might be cool technology that is amazing, but there might be no return on that investment for your customer,” he said, adding, “Part of your ML strategy must be about purpose, replication, and reuse. Those are going to be at the heart of getting value back for the organization.”   

He recommended looking at the GitHub Trending page to see the popularity of different services. He tracks them over time, using indicators such as “stars,” which signal people with a continuing interest in a repository, like a bookmark. In April, for example, TensorFlow registered over 154,000 stars and PyTorch was at 47,000; those attracted the most interest.
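For readers who want to do similar tracking, the short sketch below shows one way to pull current star counts from the public GitHub REST API. The specific repositories listed are illustrative choices (not a list from Lindahl), counts change daily, and unauthenticated requests are rate-limited.

```python
# Minimal sketch: fetch current GitHub star counts for a few ML repositories.
# Uses the public GitHub REST API; repository names are illustrative only.
import requests

REPOS = ["tensorflow/tensorflow", "pytorch/pytorch", "kubeflow/kubeflow"]

def star_count(repo: str) -> int:
    """Return the current stargazer count for a GitHub repository."""
    resp = requests.get(f"https://api.github.com/repos/{repo}", timeout=10)
    resp.raise_for_status()
    return resp.json()["stargazers_count"]

if __name__ == "__main__":
    for repo in REPOS:
        print(f"{repo}: {star_count(repo):,} stars")
```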

Lindahl is also tracking MLOps: machine learning operations tools that support practices to maintain ML models in production reliably. His tracking of MLOps on GitHub shows 10,000 stars for Kubeflow, a machine learning toolkit for Kubernetes,  which is an open source container orchestration platform. What surprised Lindahl more was the rate of growth for MLReef, an open source MLOps platform with a focus on collaboration. “The number of people who downloaded it and are actively using it is going up really fast,” he said. He is researching it to see if he can understand what is driving the increased use.  

Three ML Projects at St. Luke’s Delivering Business Value in New Ways  

Justin Smith, senior director of advanced analytics, St. Luke’s Health System, Boise, Idaho

The emphasis of Dr. Smith at St. Luke’s was more on current delivered projects, and less on their technical underpinnings. The St. Luke’s complex has eight medical centers serving about one million patients. Dr. Smith described three projects. 

The first was an application to predict the rate of spread of COVID-19, so that the hospital could do some better planning. “We were receiving all kinds of wild forecasts for what to expect,” he said. “We wanted to know if we could predict how many patients we would have in our ICU and on general hospital floors.”  

The difficulty was that the forecasts were so different. Idaho had not experienced a wave of infections like the one that happened in New York, for example. The team decided to document what they did know, then produce from that data two sets of forecasts: a short-term forecast for one or two weeks, and a medium-term forecast of up to 30 days. They used a new technique, XGBoost, an open source software library popular in applied machine learning for structured, tabular data. It implements gradient-boosted decision trees, building regression trees in a stepwise fashion, with each new tree correcting the errors of the previous ones.
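As a loose illustration of that kind of gradient-boosted forecast, the sketch below fits an XGBoost regressor on synthetic tabular features. The feature names (current census, positivity rate, day of week) and the data are invented for illustration; this is not St. Luke’s actual model or code.

```python
# Minimal sketch of a short-term census forecast with gradient-boosted trees.
# The features and synthetic data below are assumptions for illustration only.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 200

# Hypothetical daily features: current inpatient census, test positivity rate,
# and day of week; the target is the census two weeks ahead.
census = rng.integers(40, 120, size=n).astype(float)
positivity = rng.uniform(0.02, 0.25, size=n)
day_of_week = rng.integers(0, 7, size=n).astype(float)
X = np.column_stack([census, positivity, day_of_week])
y = census + 200 * positivity + rng.normal(0, 5, size=n)  # synthetic target

model = XGBRegressor(
    n_estimators=300,    # number of boosting rounds (trees)
    max_depth=4,         # depth of each regression tree
    learning_rate=0.05,  # shrinkage applied to each tree's contribution
)
model.fit(X[:150], y[:150])

preds = model.predict(X[150:])
mae = np.mean(np.abs(preds - y[150:]))
print(f"Mean absolute error on held-out days: {mae:.1f} patients")
```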

“It got us to within five or six patients a month out,” Dr. Smith said, using variables including the inpatient census and the positivity rate from their own health system. “We controlled 90% of the testing, so we had good data on positivity rates,” Dr. Smith said. “We showed with strong accuracy whether we would be increasing or decreasing the patient census.”

For the supply chain, the team was asked if they could develop a “purchasing engine” that could achieve savings with more optimal purchasing from the 100 vendors and 400,000 products in the supply chain. Certain volumes of purchase qualify for better pricing, posing an optimization problem. “With such levels of complexity, it’s too large for a human. You can’t solve it on Excel,” Dr. Smith said.   

Breaking Analyst Habit of Sticking with Familiar Vendors  

An examination of the practices of hospital analysts involved in purchasing showed that many operated in familiar territory, often choosing vendors they knew and had dealt with in the past. Using advanced analytics, the team wrote algorithms that generated “billions” of scenarios. Providing a view of that data was challenging. “We don’t want to show everything, or too little either,” Dr. Smith said. The team took the approach of eliminating the non-viable scenarios, sticking to spending within existing agreements, and spending the required volume with each vendor to get the best price.

The team also tried to minimize the number of transitions, in which a product would be purchased from a different vendor. The analysis allowed the team to select opportunities representing the lowest number of transitions and the highest potential for savings across the entire health system. “It’s very powerful,” Dr. Smith said. “It’s being rolled out across the enterprise.”   
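At its core, the purchasing engine described above is a constrained assignment problem: pick a vendor for each product to minimize total spend while honoring volume commitments. The toy below sketches that formulation with the PuLP library; all product names, prices, quantities, and the commitment amount are made up, and PuLP is only an assumed stand-in, not the tool St. Luke’s used.

```python
# Toy vendor-selection problem with a volume commitment, using PuLP.
# Every number and name below is invented for illustration.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value

products = {"p1": 10, "p2": 5, "p3": 8, "p4": 2}   # quantity needed per product
price = {                                          # unit price by (product, vendor)
    ("p1", "V1"): 10, ("p1", "V2"): 9,
    ("p2", "V1"): 20, ("p2", "V2"): 22,
    ("p3", "V1"): 15, ("p3", "V2"): 14,
    ("p4", "V1"): 30, ("p4", "V2"): 28,
}
vendors = ["V1", "V2"]
committed_spend = {"V1": 150}   # existing agreement: spend at least $150 with V1

prob = LpProblem("purchasing_engine_toy", LpMinimize)
buy = LpVariable.dicts("buy", list(price), cat=LpBinary)  # 1 if product bought from vendor

# Objective: minimize total spend across all product-vendor assignments.
spend = {(p, v): products[p] * price[(p, v)] for (p, v) in price}
prob += lpSum(spend[k] * buy[k] for k in price)

# Each product is sourced from exactly one vendor.
for p in products:
    prob += lpSum(buy[(p, v)] for v in vendors) == 1

# Meet the committed volume with each contracted vendor.
for v, minimum in committed_spend.items():
    prob += lpSum(spend[(p, v)] * buy[(p, v)] for p in products) >= minimum

prob.solve()
for (p, v) in price:
    if buy[(p, v)].value() > 0.5:
        print(f"{p} -> {v} (${spend[(p, v)]})")
print("Total spend:", value(prob.objective))
```

The real engine would add binary variables and constraints for tiered pricing and for the number of vendor transitions, which is what pushes the problem beyond what a spreadsheet can handle.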

The new system will require renegotiated contracts with a number of suppliers, so Dr. Smith expects it will take several years to become fully implemented.   

The third project was around demand-based staffing. “We don’t want to be overstaffed and we don’t want to be understaffed,” he said. “We want to optimize our staffing to match historical patient demands.” While COVID did “wild things” to the patient census in 2020, that data was included. The system tried to avoid both the high labor costs associated with “on-call” roles and having to send medical professionals home when demand is low.

“The solution we created, which is still rolling out, is based on algorithms that offer an optimized schedule that matches staff and census according to predicted demand based on the last three years of data, including 2020,” he said. The system was focused on nurses and their support staff to start. Using a visual picture of the forecasts via Power BI, the business analytics service from Microsoft, human professionals were able to adjust the recommended staffing based on “front-line knowledge.” That would be, for example, knowing there is a football game, a music festival and a rodeo all in the same weekend, something the machine learning algorithm might not pick up.   

“The output might be that we need six RNs to cover the 7 am to 7 pm shift, but just four for the 9 am to 5 pm shift,” Dr. Smith said. The system is beginning to be deployed and can project weekly staffing levels needed for four to six weeks out, he said. “It’s very dynamic, and senior executives can look and see how we are doing in hospital staff,” Dr. Smith said.  

Learn more at the recent Ai4 Conference, at the GitHub Trending page, at Nels Lindahl and at Dr. Justin Smith. 

5G and AI Combine to Advance the Capabilities of Drones 

By AI Trends Staff  

What do you get when you combine 5G and AI with advanced drone development?  

One answer is from Qualcomm’s recent launch of its Flight RB5 5G Platform, a reference drone containing computing at lower power with AI, 5G, and long-range Wi-Fi 6 connectivity.   

According to the company in a press release, the drone and reference design contains “enhanced autonomy and intelligence features” powered by the Qualcomm QRB5165 processor. Announced in June 2020, the QRB5165 processor is customized for robotics applications and is coupled with the Qualcomm AI Engine, which delivers 15 trillion operations per second (TOPS) of AI performance. This allows it to run complex AI and deep learning workloads, including on-device machine learning and accurate edge inferencing, while using less power, according to a Qualcomm product description.

The processor also offers an image signal processor (ISP) that supports seven concurrent cameras, a dedicated computer vision engine for enhanced video analytics (EVA), as well as the Qualcomm Hexagon Tensor Accelerator (HTA).  

Wi-Fi 6 is capable of a maximum throughput of 9.6 Gbps, compared with 3.5 Gbps for Wi-Fi 5 and 600 Mbps for Wi-Fi 4. The 5G network enables connections with machines, objects, and devices at speeds up to 100 times faster than 4G. Downloading a high-definition film over a 4G network, for example, can take 50 minutes on average, while on 5G, it takes nine minutes, according to a summary from the Thales Group.
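To put those figures in rough perspective, the arithmetic below converts link speeds into download times for a single file. The 5 GB film size and the “typical” 4G and 5G rates are assumptions chosen to be roughly in line with the Thales figures; the Wi-Fi numbers are theoretical maxima, not observed throughput.

```python
# Back-of-the-envelope download times at different link speeds.
# The 5 GB film size and the "typical" cellular rates are illustrative
# assumptions; the Wi-Fi figures are theoretical maxima.
FILM_GB = 5.0
FILM_MEGABITS = FILM_GB * 8_000   # 1 GB = 8,000 megabits (decimal units)

link_speeds_mbps = {
    "Wi-Fi 4 (max)": 600,
    "Wi-Fi 5 (max)": 3_500,
    "Wi-Fi 6 (max)": 9_600,
    "4G (assumed ~15 Mbps typical)": 15,   # yields ~44 min, near the ~50 min cited
    "5G (assumed ~80 Mbps typical)": 80,   # yields ~8 min, near the ~9 min cited
}

for name, mbps in link_speeds_mbps.items():
    minutes = FILM_MEGABITS / mbps / 60
    print(f"{name:>30}: {minutes:6.1f} minutes")
```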

This technology is said to allow the new RB5 5G drone to fly beyond the usual line-of-sight, which Qualcomm said would “support safer and more reliable flights over long distances.” 

According to the company, this new RB5 5G platform and reference design will support new drone use cases in sectors including film and entertainment, security and emergency response, delivery, defense, inspection, and mapping.  

Dev Singh, GM autonomous drones, Qualcomm

Dev Singh, general manager of autonomous robotics, drones and intelligence machines at Qualcomm, stated: “We have continued to engage many leading drone companies, enabling 200+ global robotics and drone ecosystem members in addition to consistently driving and promoting worldwide drone standardization and transformative 5G capabilities.” 

Qualcomm is working with Verizon to complete network testing for the Verizon 5G platform in the US, with the 5G mmWave platform set to be offered through the Verizon Thingspace Marketplace.  

Globally, Qualcomm is offering the reference design for presale now, with the Qualcomm Flight RB5 5G drone development kit expected to be made available in the fourth quarter of 2021.  

5G a ‘Game Changer’ for Autonomous Drones  

“5G is a game changer for autonomous drones,” states a blog post on the website of ModalAI, which partnered with Qualcomm on the Flight RB5 5G platform. ModalAI offers design services and computing platforms for aerial and ground autonomous navigation systems that communicate on 4G and 5G cellular networks.

Chad Sweet, CEO, ModalAI

Based in San Diego, the company spun out of Qualcomm in 2018. Cofounder Chad Sweet was lead engineer for Qualcomm’s robotics R&D effort, which led to the creation of drone and robotics product lines.

5G ensures that a drone’s camera feed updates seamlessly on the pilot’s ground control system in real time, giving the pilot an accurate view of where the drone is.

New applications include 5G-enabled delivery of medical equipment, such as COVID tests, to affected communities, limiting human-to-human contact and reducing the spread of infection. Search and rescue drones can also report data and area visuals in real time with little to no latency, increasing the speed and efficiency of missions.

Investments in 5G and autonomous drones from major companies are on the rise. Uber Eats, Amazon, and UPS have invested in their own fleets of autonomous drones. Network carriers Verizon, AT&T, and T-Mobile are promoting their own private 5G networks and are testing for drone compatibility.  

“Autonomous flight on the 5G network is here now—extending the flight path of drones to increase the value of a broad set of inspection capabilities,” stated Sweet, CEO of ModalAI, in a press release from the company. “Better yet, the platform is open to everyone. Together, ModalAI and Qualcomm Technologies are advancing the adoption and commercialization of drone technology.”   

Work Progressing on Traffic Management for Drones 

New applications for 5G drones include unmanned traffic management systems (UTMs), according to a blog post from UAV Coach, which offers drone training to its community of 75,000 drone pilots. The goal of UTMs is to create a system that can safely integrate drones into the air traffic already flying in low-altitude airspace, so that package delivery will not interfere with helicopters, airplanes, and other drones. UTM is being studied by NASA’s Ames Research Center, working with the FAA and over 100 partners across various industries.

In 2021, more than 873,000 drones, also called unmanned aircraft systems (UAS), are registered to fly in the US, and the numbers are increasing quickly.   

Verizon has been focusing on the drone opportunity for its business for some time, as evidenced by its acquisition of drone fleet management software company Skyward in February 2017. Mariah Scott, president of Skyward, stated, “We knew early on that connectivity would be critical for drones to truly transform our world. And now 5G Ultra Wideband will usher in a new era in aviation, where we connect and integrate drones into the national airspace.” 

Verizon announced 5G Ultra Wideband in April 2019, and committed to launch the services in 60 cities by the end of last year.  

In an ambitious statement made at CES in 2019, Verizon CEO Hans Vestberg announced the company’s “commitment to be the first to connect 1 million drone flights to the 5G network.”  

T-Mobile covered 270 million people with its low-band 5G network by the end of 2020, while Verizon covered 230 million and AT&T reached 225 million, according to an account in cnet. 

Read the source articles and information in press release and product description from Qualcomm, blog post and press release on the website of ModalAI, in a blog post from UAV Coach and in an account in cnet. 

The Mouse Brain As A Gatekeeper For AI And Autonomous Cars  

By Lance Eliot, the AI Trends Insider  

Mice can be amazingly smart, mightily so. We all would likely agree that mice are quick on their feet and physically agile. On a personal level, I’ve dealt with mice that decided to take up residence at my home and were determined to stay without my permission and without paying any rent. Upon witnessing a mouse running around in my domicile, I rushed over to the local store and got an everyday mousetrap.   

Turns out that the mice avoided the thing and probably got quite a laugh at my having set it up to begin with. Let’s refer to them as mocking mice, in addition to being mighty mice.   

I watched numerous popular online videos claiming to provide a guaranteed-to-work formula to catch a mouse. Those darned clever mice seemed to escape each one.   

I finally threw in the towel and did what my colleagues and friends had suggested all along. I got a cat.   

I don’t know if my beloved cat dispatched the mice directly, or maybe they headed for the hills after realizing that there was a new sheriff in town. As I said, mice can be very cunning.   

Scientists would say that mice are reasonably intelligent beings. They are being analyzed and scrutinized on a daily basis. The assumption is that mice can reveal all sorts of insights about animals in general, and about mammals in particular, and hopefully provide a helpful glimpse into the nature of human brains and human capabilities.  

They seem to have both short-term and long-term memory. Mice will figure out more expedient paths through a maze and seek to optimize their behavior. They seem to be able to work in teams, making use of various communications to coordinate their joint activities. It is believed that the communications take place via sound, touch, smell, urination, and even by the act of thumping their appendages.

Mice might be a crucial pathway toward figuring out the human brain. Mice might also be a crucial pathway toward devising Artificial Intelligence (AI) in machines.   

Say what? 

Yes, the more we can identify how brains work, mice or humans, the better chance we would seem to have toward crafting AI systems. If we can somehow reverse engineer brains, we might be able to create AI systems that do more of what brains do. Some believe that we will simply mimic or simulate the wetware of brains. Others indicate that we might not need to do such mimicry and can instead glean insights to devise AI systems that are able to exhibit intelligent behavior, though they might be made in completely different ways than that of wetware brains.   

You can conceive of efforts in AI as racing forward on multiple fronts at once.   

There are those that don’t especially pay attention to the biomedical efforts of brain reverse engineering. They are fine with such work taking place, but the pace and progress have little bearing on their AI efforts. Absent any hard-and-fast tangible indications from the wetware focus, those AI developers and researchers are forging ahead anyway. No need to wait for the bio side to pin down how the brain operates. 

Meanwhile, there are other AI developers and researchers that closely monitor or are directly involved in these wetware examinations. This is then infused into their AI constructions. The AI use of Machine Learning (ML) and Deep Learning (DL) is notably influenced by and tends to use Artificial Neural Networks (ANNs), which are somewhat akin to wetware neural networks, though decidedly less so and not on par (as yet).   

The thing is, there is a huge debate about whether having tons upon tons of something that runs on a machine in a computer-based way that mimics neurons is ever going to arise to intelligent behavior in the manner that we construe intelligence. If we don’t also come to understand how intelligence arises from the neurons and their interconnections in wetware, we might be doomed to having merely a humongous network of computer-based simulated neurons that aren’t particularly overwhelmingly impressive.   

Perhaps we might get AI toward parts of intelligent behavior via these massive ANNs and then get stuck. The widespread supposition is that the only way to get unstuck will involve decoding how brains give rise to intelligence in the natural world.

The mighty mouse might be a significant step in that direction.   

Where might a better devised AI be used, namely an AI that might be shaped around what we ultimately learn from the brains of mice? The mouse brain could be one of the cornerstones or keys to achieving AI-based true self-driving cars.   

The future of cars consists of AI-based true self-driving cars. There isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, and nor is there a provision for a human to drive the vehicle.   

Here’s an intriguing question that is worth pondering: How might the insights from reverse-engineering the brains of those lovable and clever mice somehow stoke the advent of AI-based true self-driving cars?   

I’d like to first further clarify what is meant when I refer to true self-driving cars.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/ 

Understanding The Levels Of Self-Driving Cars 

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.   

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend). 

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3. 

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/  

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/  

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars And The Mouse That Roared   

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving. 

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.   

Why is this added emphasis about the AI not being sentient? 

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.   

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car. 

Let’s dive into the myriad of aspects that come to play on this topic.   

We are considering how mouse brains might be useful for making further progress on achieving AI-based true self-driving cars.   

It would seem apparent that we know everything there is to know about how humans drive. 

The thing is, all of those discernible rules and that abundant logic about the driving act ultimately get absorbed into the human brain. Once the whole kit and caboodle go into your brain, we really do not know what takes place. How the brain turns all those wetware machinations into human thinking is still largely a mystery.

In short, if studies of the brains of mice could reveal the innermost secrets of how the brain gives rise to thinking, we might be able to unlock the same as it pertains to human thought. And, if we did that, we would have a much clearer understanding of what goes through the human mind during the driving chore. Some ardently believe that the vaunted and revered Level 5 will not be achieved unless we can first unpack the inner workings of human thought.

That being said, I would not want to imply that the glories of finally figuring out how humans think are going to be used simply to garner autonomous vehicles and self-driving cars. You can bet your bottom dollar that a lot more could be achieved with AI that leveraged or exploited the byzantine complexities of the human brain. 

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/   

Conclusion 

Start small and make our way to something big, really big.   

We can welcome and seek to squeeze every ounce of insight from the mapping of the mouse brain. A lot more mapping is going to be needed. Be thankful for those gallant mice that contribute to this grand quest.   

A final comment for now. 

The estimated number of neurons in the human brain is around 86 billion. For a mouse, it is something like 70 million. Recall how I obtained a cat, partially prompted to deal with my mouse problem at home.

The estimate of the number of neurons in the brain of a cat is around 250 million. On a rough comparison, this means that the cat has perhaps three to four times the number of neurons of a mouse. Please note that you have to be careful making any generalizations by the sheer count of neurons alone, since that’s not the only factor involved.   

But, I’m sure your neighborhood cat thinks it can outsmart those troublesome mice, and the feline is eagerly and earnestly willing to prove to you that a cat is head and shoulders smarter than those annoying mouse freeloaders underfoot.

The race to figure out how to best attain AI is akin to a cat-and-mouse gambit whose outcome we don’t yet know. It is going to be exciting, that’s for darn sure.

Copyright 2021 Dr. Lance Eliot  

http://ai-selfdriving-cars.libsyn.com/website 

Manhole Covers Reveal Gaping Holes For AI Autonomous Cars Capabilities 

By Lance Eliot, the AI Trends Insider   

Have you been asked about manhole covers in any of your recent high-tech company job interviews?   

Probably you have.   

Yes, oddly enough, manhole covers have become a common topic for job interviews in the last several years, especially when seeking a job at a high-tech firm. You might assume that the jobs in question were somehow related to producing manhole covers or possibly working on street construction crews. 

Nope.  

The matter of manhole covers was suddenly being brought up in all kinds of job interviews for nearly any kind of job at a high-tech firm. Want to be a programmer at a high-tech firm? Better be ready for those manhole questions. Want to work in finance or marketing at a high-tech firm? Manhole covers are undoubtedly going to come up.

Here’s the deal.   

In one of those recruitment trends, bringing up the offbeat topic of manhole covers became the hot new means to somehow screen candidates, supposedly separating the creative wheat from the dullard chaff. The interview questions about manhole covers were presumably aimed at eliciting how well someone thinks out-of-the-box or thinks on the spot.   

Let’s start with one of the initial mind-bending questions that gained a lot of traction with interviewers. The interviewer looks at you with a seriously straight face and asks you to estimate how many manhole covers there are in New York City. You are given a moment or two to reflect. The interviewer is watching you closely to see how you cope with this distressingly off-the-wall issue.   

If you’ve not heard the question before, please go ahead and mull over the matter in your mind. Try not to show any outward indication of cold sweat or otherwise tip your hand that you are struggling with the keen query. It would be better for you to appear like a hipster calculating machine and that you are crafting the best possible answer for this altogether immensely fascinating question.   

Of course, you might actually be thinking that it is an esoteric or eccentric question and doesn’t have much to do with the job that you are applying for. Nonetheless, in the quirky nature of how job interviews go, you must not dare to challenge such a question and instead act like it is the best question since the making of sliced bread.   

Since I don’t want you to ever lose a potential juicy job at a high-tech firm simply because of the manhole question, I’ll happily reveal the answer to you.  

Thinking Through the Number of Manhole Covers in New York City   

You are supposed to estimate the number of manhole covers by first mentally conjuring up a vision in your head that dreamily indicates the roadway layout of New York City. In essence, avenues run in one direction (typically north-south), and the streets go in a roughly perpendicular direction (usually east-west). This is a grid-style layout, akin to a matrix. Imagine a spreadsheet that has various rows and columns. 

Next, you are seeing in your mind’s eye that perhaps there is one manhole cover per each of the intersecting points of this matrix. Thus, if you multiply the number of avenues times the number of streets, which is the number of rows and columns of this imaginary matrix, you would then multiply that calculated amount by the estimate of one manhole cover and arrive at your total estimate.   

You are welcome to get tricky and assume that there is less than one manhole cover per intersecting point, ergo multiply the rows-by-columns result by a fraction. Or you can go the other way and assume that there is more than one manhole cover per intersecting point. In that case, you would be multiplying the number of rows and columns by a number greater than one.

Voila, you have the means to answer the question. 
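For those who like to see the arithmetic, here is the grid estimate worked through in a few lines. The avenue and street counts and the covers-per-intersection figure are rough assumptions for Manhattan only, not verified numbers, so the output is a Fermi estimate rather than an answer.

```python
# Fermi-style estimate of manhole covers using the grid method described above.
# All three inputs are rough assumptions for Manhattan, not verified figures.
avenues = 12                   # assumed number of north-south avenues
streets = 200                  # assumed number of east-west cross streets
covers_per_intersection = 1.5  # assumed average covers per intersection

estimate = avenues * streets * covers_per_intersection
print(f"Estimated manhole covers: {estimate:,.0f}")   # ~3,600 for these inputs
```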

Kind of. One big problem with this particular variant of a so-called “clever” question is that not everyone knows that New York City is arranged in a grid-style (those interviewers that do know this bit of arcane trivia are apt to assume everyone must know it). Without that piece of crucial knowledge, you are somewhat sunk on this oddball question. Even if you knew it was in a grid shape, you might not know how many avenues there are, or know how many streets there are.   

You could also certainly argue that this is a huge generalization anyway, since not all of New York City conforms to this shapely grid notion. But you’d be wise to not tilt at windmills, especially during a job interview.   

The usual rationalization for this manhole cover query is that the interviewer is not expecting an arithmetically-derived answer, and instead trying to gauge how you think on your feet. If you were able to sketch out how you would solve this sneaky problem, this was considered a sufficiently evidential way of proving that you are a thinker and a problem solver.   

The retort to that justification is that regrettably many interviewers are deplorable at doing interviews, and as such, they completely distorted the manhole cover conundrum. Some interviewers would eagerly wait for a number, and if you said anything other than providing a single number, you were marked as wrong. For those interviewees that did calculate a number, some interviewers would have the nervy gall to mark you as “wrong” if the number was not an exact match to the number of actual manhole covers in New York City.   

Things got worse from there. 

Creative Thinking Required   

Some savvy interviewees that knew the question might be potentially asked (having been tipped by others that lamentably got crushed by the question), shrewdly memorized the number of actual manhole covers (as reported by the New York City roadway infrastructure agency), and would simply proffer that number. This might allow you to skate through the question. On the other hand, some interviewers would then ask how you derived the number. If you could not tell the inspiring story of having deduced the matter via the grid method, you were sunk for not being creative in your thinking.   

Other manhole cover questions are used in interview settings. For instance, explain why manhole covers are round.

You see, your whole life you’ve probably seen just round manhole covers and never put much thought into why they are round. Perhaps there is some really good reason for this round shape. It could also simply be a tradition that few have ever sought to overturn.   

Once again, this is a thinker or problem-solver type of question. You are expected to demonstrate that you can take a problem that you’ve not seen before, and turn it on its head, solving it in a few seconds and under the bright lights of a job interview. 

While many do not put much stock in these attempts to catch a job interviewer off-guard, it is a commonplace tactic by interviewers. Indeed, it is sometimes the lazy way to undertake an interview. Merely scrape away at the interviewee with all kinds of problem-solving queries, and you’ve used up that allotted half-hour or hour of the interview. You can write up your notes and make it seem that you really grilled the interviewee, as though that was the purpose of the interview.   

You could assert that manhole covers are round because the manholes are round holes. Duh, what else would make sense to cover them with? (Some interviewers won’t like that answer and will severely ding you for being unduly smarmy.) Or perhaps manhole covers are round so that the effort to put the cover back over the manhole is easily done and there is no struggle with the proper orientation. A round manhole cover is also readily turned on its edge and rolled to wherever you might want to temporarily store it.

Manhole Covers are Heavy 

You might find of overall interest that manhole covers are often made from cast iron and weigh around 250 pounds. There are plenty of variations, including manhole covers that weigh around 100 pounds and others that weigh more than 300 pounds. There are also manhole covers that are not round, though this is rare in the United States. The lip or width of a manhole cover is usually around an inch or so, sometimes thinner and sometimes thicker.

You probably do not give much thought daily to manhole covers. You drive over them all the time, and yet they rarely register in your noggin as something to be noticed. Sure, there might be a construction crew that has removed a manhole cover, for which the roadway is usually blocked with cones and other obstructions to keep you from going over the now-open manhole.   

In theory, you could likely straddle the open manhole with your vehicle and not suffer any damages to your car. Do not do this! A very bad idea. Plus, for all you know, there might be someone down below in the manhole. Think of the danger that a car passing over an open manhole presents to anyone that is within the manhole itself. 

I would bet that nearly all of us would certainly avoid an open manhole, even if there weren’t brightly colored cones blocking it off. The moment that you noticed up ahead that the roadway presented an open manhole, you would either come to a stop beforehand or at least maneuver into another lane to entirely avoid the gaping opening.   

Suppose though that the manhole cover was only slightly ajar.   

This presents a different situation in that you might or might not realize that the manhole cover is not fully seated in its proper spot. Also, consider the speed of your vehicle as a factor. If you are driving fast down a street and the manhole cover is ajar, perhaps you would fail to notice it out of place or spy it only at the last moment, allowing very little reaction time to cope with the unusual circumstance.   

Most of us would say that luckily we’ve never had that kind of circumstance arise. Manhole covers are nearly always sitting in their proper placement over a manhole, or the manhole cover is fully off the manhole and sitting in the street by itself or has been placed on the sidewalk. The odds of encountering a manhole cover that was neither completely implanted and nor completely removed would seem quite remote.   

It happened recently, sadly so.   

There is an unnerving video that was captured by a camera-wielding resident showcasing a recent tragic roadway incident involving a pick-up truck and a manhole cover. The video has been recently posted by the local police within the jurisdiction that the matter occurred. The dreadful incident occurred in Panorama City, California, which is in the city of Los Angeles.   

Apparently, a young man in his 20s was, for as yet unknown reasons, down in a covered manhole and attempted to climb out by pushing the manhole cover up and over. Imagine trying to push up a manhole cover from underneath it. That’s a lot of weight to try and displace.

Just as the manhole cover begins to shift up and slightly over, a pick-up truck comes into view of the video. The pick-up truck runs directly over the manhole cover, which is now askew and no longer fitted over the manhole per se. To give you a semblance of the forces involved, the tires of the pick-up truck thud upon the manhole cover, and the surprising result is that the manhole cover goes flying, akin to a frisbee disc. Or maybe think of a gigantic and highly dangerous tiddlywink that once you press strongly on the edge, it will go flipping and flying. It is unimaginable that a hefty manhole cover could be so readily hurled into the air, but it does so with seeming ease. 

Anyway, at the same time, it appears that the young man fell back down into the manhole as the pick-up truck proceeded overhead of the partially opened manhole. As you likely know, some manholes are relatively shallow and others can be dozens of feet in depth. In any case, according to the police, the young man died at the scene due to his injuries and was retrieved by the local fire department. 

You might be wondering if the weather or roadway conditions or other factors were at play in this heartbreaking event. The video seems to indicate that the event happened on a Saturday morning around 10:30 a.m. and that there was daylight and the road was dry. This is worthwhile to consider since the matter would likely be given a different consideration if it was nighttime or perhaps the roadway was wet, and it was raining out.   

The driver of the pick-up truck kept driving and is now wanted by the police for questioning.   

In California, there is a legal requirement that a driver must immediately (as soon as practical) come to a stop at an accident scene and remain there, regardless of whether the person believes they were not at fault and even if there aren’t any apparent damages or injuries. Generally, those that do not stop and stay at the scene can be charged with a hit-and-run crime, ranging from a misdemeanor to a potential felony. In this case, there is also the grim matter of the fatality involved.

The driver should have remained at the scene.   

Shifting gears, besides the shocking nature of the incident, we can also consider what lessons might be learned.   

The notion of a manhole cover being pushed up in the midst of an active roadway and without any other indicators or signage to forewarn the occurrence is a seeming outlier or sometimes referred to as an edge or corner case. In the parlance of those that study occurrences of things, an edge or corner case is considered something that only has a remote chance of occurring. It can occur, but only with some semblance of rarity.   

Perhaps this tragedy will spark us all to be cognizant of manhole covers, though on a day-to-day driving basis this would seem a somewhat extreme topic to keep at top of mind. When you are driving your car to work or simply to the local grocery store, you might occasionally take a second glance at those manhole covers. If you do so, please do not become unnecessarily distracted by the manhole covers, since the attention ought to be predominantly on the overall roadway status.   

Speaking of cars, the future of cars consists of AI-based true self-driving cars.   

There isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, and nor is there a provision for a human to drive the vehicle.   

Here’s an intriguing question that is worth pondering: How might an AI-based true self-driving car contend with a manhole cover that is partially out of place?   

I’d like to first clarify what I mean when referring to AI-based true self-driving cars. 

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/ 

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars   

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.   

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).   

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.   

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/ 

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/   

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/ 

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars; here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/ 

Self-Driving Cars And Manhole Covers   

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving. 

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.   

Why this added emphasis about the AI not being sentient?   

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet. 

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car. 

Let’s dive into the myriad aspects that come into play on this topic.   

First, it is important to realize that self-driving cars make use of various sensors to try and detect the driving scene. The types of sensors can include video cameras, radar, LIDAR, ultrasonic units, thermal imaging, and other such devices. All told, these are the eyes and ears of the AI driving system. Data is collected via the sensors and then various onboard computational analyses are made, oftentimes making use of Machine Learning (ML) and Deep Learning (DL). The driving scene is internally estimated based on the patterns found.   
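
To make that pipeline a bit more tangible, here is a minimal, purely illustrative Python sketch of the bookkeeping involved, not any automaker’s actual stack; the SensorFrame structure and fuse_frames routine are made-up names, and a real system would fuse probabilistic ML/DL outputs rather than simple counts.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sensor frame: in a real system this would carry camera images,
# radar/LIDAR point clouds, and so on. Here it simply holds the labels that a
# trained ML/DL perception model is assumed to have produced from raw data.
@dataclass
class SensorFrame:
    source: str                            # e.g. "camera", "radar", "lidar"
    detections: List[str] = field(default_factory=list)

def fuse_frames(frames: List[SensorFrame]) -> dict:
    """Combine per-sensor detections into one internal scene estimate.

    A production AI driving system fuses probabilistic model outputs; this
    sketch just counts how many sensors agree on each detected object.
    """
    scene = {}
    for frame in frames:
        for obj in frame.detections:
            scene[obj] = scene.get(obj, 0) + 1
    return scene

# Usage sketch: two of three sensors report a manhole ahead.
frames = [
    SensorFrame("camera", ["lane_marking", "manhole"]),
    SensorFrame("lidar", ["manhole"]),
    SensorFrame("radar", []),
]
print(fuse_frames(frames))   # {'lane_marking': 1, 'manhole': 2}
```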

One of the most important aspects to be detected is something that might seem surprising to you, namely the detection of open space that is considered drivable. Humans are so used to eyeing the nature of drivable spaces that it is essentially second nature. You look at a street and can almost instantaneously gauge where the street is, along with where the curbs and medians are, and overall ascertain where you can suitably drive your car. 

It seems that this is likely a learned facet. Think back to your childhood. An adult probably pointed out the street to you and told you to look both ways, explaining that cars go there and you have to be watchful of the monstrous beasts. After a while, you gradually became adept at figuring out streets, avenues, highways, etc. Your adult mind can nearly always look at an area around you and quickly identify where a car could be driven and where it could not be driven.   

Using a computer to figure this out is not so easy. The video camera provides visual images that are examined computationally to try and identify the drivable spaces. Radar and LIDAR can be used for this purpose too. Inside the AI driving system, there is a kind of virtual database or computer model being maintained to try and keep track of where the self-driving car can go and where it ought to not go.   
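
As a rough picture of what that internal “virtual database” might amount to, here is a toy sketch of a drivable-space grid; the grid layout and the marking functions are assumptions for illustration only, since production systems maintain far richer, probabilistic world models.

```python
import numpy as np

# A toy "drivable space" grid: 0 = unknown, 1 = drivable, -1 = not drivable.
# Real systems fuse camera segmentation and LIDAR returns into a far richer
# model; this grid only illustrates the bookkeeping idea.
grid = np.zeros((10, 10), dtype=int)

def mark_drivable(grid, cells):
    for r, c in cells:
        grid[r, c] = 1

def mark_blocked(grid, cells):
    for r, c in cells:
        grid[r, c] = -1

# Hypothetical camera segmentation says the lane ahead is clear...
mark_drivable(grid, [(r, 4) for r in range(10)])
# ...but LIDAR returns flag an anomaly two cells ahead (say, an open manhole).
mark_blocked(grid, [(2, 4)])

print(grid[:4, 3:6])   # a small window showing the blocked cell in the lane
```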

Why have I dragged you through that technological indication?   

Because the key to this discussion is whether or not a self-driving car could detect a manhole and a partially ajar manhole cover.   

Detecting the instance of a manhole that is covered entirely by its manhole cover is relatively straightforward. Likewise, detecting a manhole that is completely uncovered and lacking its manhole cover is somewhat uncomplicated, though this can be tricky in certain circumstances. Recall that earlier I had brought up the factor of various roadway conditions, such as whether it is raining out or nighttime. The roadway conditions could make detecting an open manhole a lot more difficult than it otherwise would normally be.   

The really tough instance is when the manhole cover is only slightly ajar.   

Suppose the visual images coming in from the video cameras are being obscured by bright sunlight shining directly into the camera lenses. Getting a crisp set of images of the street and the manhole could be confounded. The profile of the manhole cover is quite low to the ground, and at only an inch thick it is going to be hard to directly see or to get radar or LIDAR returns that provide a definitive indication of what is happening, especially in real-time.   

Another concern is whether the AI developers conceived of the matter entailing a partially ajar manhole cover.   

It could be that the programming of the AI driving system is only devised to determine whether the cover is in place or it is not in place. This might be a binary choice. Until the manhole itself is fully exposed, the internal program might be assigning a probability that the manhole cover is in place and won’t mathematically make the switchover into the manhole cover being out of place until enough imagery or other sensory data seems to turn the tide in that determination.   
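
A stripped-down sketch of that binary determination might look like the following; the probability threshold and the two-state outcome are assumptions for illustration, and they show exactly how a partially ajar cover could linger on the “in place” side of the line.

```python
def cover_state(probability_in_place: float, threshold: float = 0.5) -> str:
    """Toy version of a binary 'cover in place / not in place' decision.

    The threshold is a made-up parameter; a real AI driving system would tune
    such values carefully and likely track far richer state than two labels,
    which is exactly why a partially ajar cover can fall through the cracks.
    """
    return "in_place" if probability_in_place >= threshold else "not_in_place"

# A slightly ajar cover might still score 0.8 "in place" and never trip the switch.
print(cover_state(0.8))   # in_place
print(cover_state(0.3))   # not_in_place
```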

You see, the developers and management might have decided that a partially ajar manhole cover is an edge or corner case that for now is not given much priority. The development team is doing all it can to just get their self-driving car to drive safely from point A to point B. Dealing with a partially open manhole is not something that they would likely consider a core part of the driving task.   

The other added twist involves what to do even if the detection takes place with ideal accuracy. 

Overall, the first crucial aspect entails identifying that such a situation exists, and the second and equally vital part of the matter involves the AI driving system computationally laying out what to do about this discovered anomaly in the roadway scene. 

Should the AI driving system have the self-driving car come to a stop? Maybe there isn’t enough time available to stop before striking the manhole cover.   

Should the AI driving system swerve into another lane? But there might be other cars there or pedestrians nearby that could get hit by the veering self-driving car. 

And so on.   
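
To give a flavor of the kind of trade-off the AI driving system would have to compute in those few moments, here is an illustrative-only decision sketch; the fixed deceleration figure and the simple lane check are assumptions, not how any particular driving system actually decides.

```python
def choose_maneuver(distance_m: float, speed_mps: float,
                    adjacent_lane_clear: bool) -> str:
    """Illustrative fallback logic for an unexpected roadway anomaly.

    Assumes a flat 6 m/s^2 braking deceleration and a yes/no lane check,
    both of which are placeholders for this sketch.
    """
    stopping_distance = speed_mps ** 2 / (2 * 6.0)   # v^2 / (2a)
    if stopping_distance <= distance_m:
        return "brake_to_stop"
    if adjacent_lane_clear:
        return "swerve_to_adjacent_lane"
    return "brake_hard_and_minimize_impact"

# Traveling about 20 m/s (roughly 45 mph) with the cover 20 m ahead and the
# next lane occupied, neither stopping nor swerving is cleanly available.
print(choose_maneuver(distance_m=20, speed_mps=20, adjacent_lane_clear=False))
```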

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my detailed discussion: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/   

Conclusion   

There are many more entanglements to this problem.   

Suppose the self-driving car does strike the manhole cover and takes a beating when the flying manhole cover smacks the underbody of the autonomous vehicle and perhaps rips up the tires.   

Will the AI driving system be able to detect that this has happened?   

As I’ve explained in my columns, the AI driving system only gets informed by whatever the sensors detect and might not be able to readily detect when the vehicle itself has struck something or collided with objects. Hitting and causing the manhole cover to go flying might or might not be detected by the sensors. If the sensors do detect it, the AI driving system has to be sufficiently programmed to interpret what the actions portend.   

A human driver would certainly realize they hit something. They would likely realize it had to be the manhole cover, assuming they saw it at the last moment and felt the car shudder as it banged into the manhole cover. The odds are too that the human driver at the wheel in such a case would hear the sounds of the manhole cover as it clanged back down onto the roadway.   

In short, it seems almost impossible to imagine that a human driver would not know they had struck the partially ajar manhole cover in this kind of situation.   

On the other hand, it is conceivable that an AI driving system might not detect the matter, or that it detects the matter yet has no provision for what to do, nor how to interpret the situation. That being said, all else being equal, most AI driving systems are programmed to try and safely come to the soonest practical stop if something untoward seems to have occurred. The AI driving system may ascertain that something unusual has happened and go into a type of failsafe mode that instructs the self-driving car to come to a stop.   
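
A bare-bones sketch of that kind of “something untoward happened” trigger follows; the signals and thresholds are hypothetical placeholders, the point being that the AI reacts to whatever its sensors report rather than to an understanding that it just hit a manhole cover.

```python
def should_enter_failsafe(imu_shock_g: float, tire_pressure_drop_pct: float) -> bool:
    """Minimal sketch of an anomaly check that could trigger a failsafe stop.

    The two signals and their thresholds are invented for illustration; real
    systems monitor many more channels and validate such logic extensively.
    """
    return imu_shock_g > 2.0 or tire_pressure_drop_pct > 15.0

# A hard jolt plus a sudden pressure drop: bring the vehicle to a prudent stop.
if should_enter_failsafe(imu_shock_g=3.4, tire_pressure_drop_pct=22.0):
    print("Entering failsafe: bring the vehicle to a prudent stop.")
```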

A human driver would presumably know why they are stopping the car and would comprehend the fact that they hit a loose manhole cover. An AI driving system might simply come to a stop because something happened, which is unknown to the AI driving system, and a catchall act entails coming to a prudent halt.   

Now you know quite a bit about this narrow topic.   

Various estimates suggest that there are approximately 20 million manholes in the United States. We can reasonably assume that means there are a corresponding 20 million manhole covers out on our streets too. 

You readily pass over the ubiquitous and seemingly unseen manhole covers each and every day. They largely go unnoticed by human drivers. When a manhole cover is missing, it seems nearly all human drivers would tend to notice the gaping hole. Most humans would hopefully and wisely try to divert away from the opening, and possibly warn other drivers about the matter. 

Driving along and encountering a partially ajar manhole cover is something that few have likely ever witnessed, especially when the roadway wasn’t flagged or coned to highlight the dangerous setting. It seems likely you’ll never be in the driver’s seat of your car and see ahead of you a manhole cover as it is being pushed up and out of the way, but if you do, at least now you’ve thought about what to do, and I trust you will do the right thing.   

Keep your eyes open and your wits about you. In addition, as a source of relief, perhaps, I’m sure that the AI driving systems will eventually incorporate the manhole ajar use case, and we can rest easy while riding inside a self-driving car that the autonomous vehicle will do the right thing too.   

That’s an AI driving system that manages to cover all the bases.  

Copyright 2021 Dr. Lance Eliot  http://ai-selfdriving-cars.libsyn.com/website 

Nvidia In the Lead in AI Chips and is Working to Stay There 

By John P. Desmond, AI Trends Editor  

Nearly 100% of AI-accelerator chips are from Nvidia today, and the company cofounded in 1993 by CEO Jensen Huang is working hard to maintain its lead position in AI processing.  

Jensen Huang, CEO and cofounder, Nvidia

Still, the AI landscape now includes many companies engaged in efforts to build the next generation of AI chips, capable of processing ever-increasing workloads in data centers and handling more processing pushing out to edge computers. 

That Nvidia is in a dominant position today in the AI chip market is not in dispute. Its graphics processing unit (GPU) chips were deployed in 2019 in over 97% of AI accelerator instances (hardware used to boost processing speeds) at AWS, Google, Alibaba, and Azure, the top four cloud providers, according to a recent account in Wired UK.  

Nvidia commands “nearly 100%” of the market for training AI algorithms, stated Karl Freund, analyst at Cambrian AI Research. Nearly 70% of the top 500 supercomputers use its GPUs, and AI milestones such as the GPT-3 large language model from OpenAI and DeepMind’s board game champion AlphaGo have executed on Nvidia hardware.  

Yet with AI very much in the mainstream with more businesses using it to churn through their data for new insights, and governments budgeting money for research to stay ahead, challengers are competing to catch up. They include: Google, which started making its own chips in 2015; Amazon, which last year began fielding Alexa on its own Inferentia chips based on the acquisition of Annapurna Labs in 2016; Qualcomm with its Cloud AI 100; and IBM working on an energy-efficient design.   

In addition, AMD acquired Xilinx for AI data center work, and Intel bought startups Nervana Systems in 2016 and Habana Labs in 2019 to boost its AI chip efforts. Startups engaged in the race include Graphcore, SambaNova, Cerebras, Mythic AI, Blaize and Tenstorrent, according to the Wired UK account.  

Google’s Announcement of TPU Chips Raised Awareness of AI Chips  

Google’s announcement in May 2016 at its I/O conference of Tensor Processing Units changed the game, alerting investors that AI chips might be a market. The Google TPU is an AI accelerator application-specific integrated circuit (ASIC) developed for neural network machine learning, and in particular, Google’s own TensorFlow software. The company had been using them internally for over a year before the May 2016 announcement.  

Nigel Toon, CEO and cofounder, Graphcore

Andrew Feldman, founder and CEO of Cerebras Systems, which is making a chip designed for AI, stated that Google’s effort to make its own AI chip was a “swashbuckling strategic decision.” The announcement raised the awareness of investors, who had been less than responsive to startups looking to build new AI chips. Simon Knowles, cofounder and CTO of Graphcore, a Bristol, UK startup which makes the Intelligence Processing Unit for AI, stated, “Suddenly all the VCs were like, where are those crazy Brits?” Graphcore has now raised $682 million and Cerebras has raised $122 million to date, according to Crunchbase.  

The rivals argue that with its roots in gaming software designed for graphics instead of machine learning, Nvidia has achieved market dominance by effective optimization of its software. “NVIDIA has done a fabulous job hiding the complexity of a GPU,” stated Graphcore co-founder and CEO Nigel Toon. “It works because of the software libraries they’ve created, the frameworks and the optimizations that allow the complexity to be hidden. It’s a really heavy lifting job that Nvidia has undertaken there.”  

Nvidia CEO and cofounder Jensen Huang pushed back on this line of thinking in a recent interview in Time, in which he stated, “We always started as a computing company. It just turned out that our first killer app was video games.” They made a projection that the 3-D graphics market was going to be large and technically demanding. “Video games were one of the best strategic decisions we ever made,” he stated.  

AI turned out to be a good market too. “We’re in the process of automating intelligence so that we can augment ours,” Huang stated. “And we can automate intelligence to operate at the speed of light, and because of computers, we can automate intelligence and scale it out globally instantaneously. Every single one of the large industries will be revolutionized because of it. When you talk about the smartphone, it completely revolutionized the phone industry. We’re about to see the same thing happen to agriculture, to food production, to health care, to manufacturing, to logistics, to customer care, to transportation.”  

Nvidia Sees Opportunity in Auto Industry  

In transportation, Nvidia sees opportunity in the automotive industry. “The car industry could start thinking about their cars not as vehicles that they sell, but as part of their fleet and their installed base,” Huang stated to Time. “They could provide software services over the course of the 15, 20 years that these cars are on the road. So the business model is transformed.”  

The investment community certainly sees Nvidia as a good investment today. A recent account from Nasdaq written by analysts at The Motley Fool, a financial advice company, is headlined, “Why Nvidia is a Top Growth Stock to Buy Right Now.”  

Among the reasons was the strength in its core video gaming graphics and data center markets. The company’s recent earnings exceeded expectations, with Q2 revenue increasing 68% from the previous year to $6.51 billion. The gaming segment contributed 47% of total revenue, on the strength of the Ampere GPUs, and Nvidia controls 80% of the PC gaming hardware market. 

Meanwhile, the growth in cloud computing is helping Nvidia’s data center business. “The business’s strong growth was driven by the increasing adoption of its data center chips by hyperscale and cloud computing customers,” which include Google, Amazon, and Microsoft, stated the Motley Fool analysts.   

Nvidia’s automotive business saw $152 million in revenue in Q2, a 37% increase over last year despite a slowing in the auto industry. The analysts report that Nvidia has $8 billion invested in automotive designs that it will be looking to translate into revenue in the coming years. “Nvidia could soon count on another growth driver,” the account stated.  

Read the source articles and information in Wired UK, in Time, and from Nasdaq. 

After Some Success, Companies Seek Ways to Accelerate AI Adoption  

By AI Trends Staff  

Companies that have had some success with their initial AI projects are seeking ways to accelerate adoption to deliver more value to the business. One researcher has defined an AI Adoption Maturity Model that presents a roadmap for accelerating AI adoption.  

Dr. Michael Wu, chief AI strategist, PROS Holdings

The first stage of the six-step AI adoption maturity model is the digitization of work, turning work in the physical world into digital processes that can be tracked and recorded as data, suggests Dr. Michael Wu, chief AI strategist for PROS Holdings, which provides AI-based software as a service for pricing optimization, with a focus on the airline industry.  

“This stage is all about getting the data, which is the raw material for AI,” stated Wu, in an account from ZDNet. “If you are on the digital transformation bandwagon, good for you. You are already in Stage 1 of this maturity curve.” 

Wu felt qualified to create his own AI adoption maturity model from his work in applying data science expertise to analyze consumer behaviors on social media for over 10 years in his previous roles. These included chief scientist at Lithium Technologies, now called Khoros, offering software for customer engagement, online community management, and social media analytics.    

The move to contactless digital technologies will speed the adoption of AI because it is required to scale to the global nature of ecommerce, and because AI is a differentiator against online competition. “Every digital business must and will eventually adopt some form of AI,” he states.  

The critical mass of data required to train AI is high, so it takes a long time for companies to achieve the necessary data volume. In the meantime, companies can derive value from the data they have captured by performing analytics to help managers make better decisions.   

Most companies will begin with descriptive analytics that summarize the data they have collected, in reports and dashboards, perhaps powered by business intelligence tools. As the volume and diversity of the data assets grow, the company may be ready to perform predictive analytics. For example, a manufacturing plant may use mechanical operation data to infer the failure time of certain machinery in order to perform predictive maintenance, Wu suggests.  
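
As a rough illustration of that predictive-maintenance idea (and not any vendor’s actual model), the following sketch fits a simple classifier to made-up vibration and temperature readings to estimate near-term failure risk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy predictive-maintenance data (invented for illustration): vibration level
# and operating temperature for a machine, with 1 = failed within 30 days.
X = np.array([[0.2, 60], [0.3, 65], [0.8, 90], [0.9, 95], [0.4, 70], [1.1, 98]])
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimate the failure risk for a machine currently running hot and vibrating.
print(model.predict_proba([[0.85, 92]])[0][1])   # probability of near-term failure
```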

Eventually, the company has gathered enough data to perform prescriptive analytics, used to prescribe actions that optimize some outcomes. For example, a pricing recommendation for a product is set to optimize revenue. Marketing automation can prescribe the engagement frequency for a prospect, to optimize the opportunity to convert prospects to customers. Now the organization is ready for the next stage of the maturity curve. 
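
A toy version of that prescriptive step might look like the following; the demand curve and candidate price range are invented for illustration, whereas real pricing engines of the kind Wu describes rely on far richer demand models.

```python
# Prescriptive-pricing sketch: pick the price that maximizes expected revenue
# under an assumed (made-up) linear demand curve.
def expected_revenue(price: float) -> float:
    demand = max(0.0, 1000 - 8 * price)   # hypothetical demand curve
    return price * demand

candidate_prices = range(10, 121, 5)
best_price = max(candidate_prices, key=expected_revenue)
print(best_price, expected_revenue(best_price))   # recommended price and its revenue
```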

Wu makes the point that the steps within the first stage of the maturity model are “vendor-agnostic, technology-agnostic and use-case-agnostic.” Rather, they are derived from “social science principles,” which he describes as “not like the fundamental laws in math, physics, or chemistry that are practically absolute. This means there will be exceptions, albeit rare.”  

Wu derived the successive steps in his AI Maturity Model from principles in behavioral economics and psychology. He states, “So they are agnostic to vendors, technologies, use cases, industries, business models, etc.”   

The second stage in Wu’s AI Maturity Model is to exchange data for automation. “Prescriptive analytics provide the natural transition into this stage,” Wu states. The suggestion is to automate the actions that the AI has optimized. “Once the AI is trained, it would be able to help us automate that aspect of our work by mimicking our decisions and actions. So essentially, you are exchanging your data for automation,” Wu states.  

To commit to this stage, he suggests that one needs to have faith in the machine, and let it run. “We must be comfortable with letting machines make the call (at least when it’s sufficiently confident) under human supervision,” he states.  
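
That “let the machine make the call when it is sufficiently confident” pattern can be sketched in a few lines; the confidence threshold here is an arbitrary assumption, not a recommended value.

```python
def route_decision(action: str, confidence: float, threshold: float = 0.9) -> str:
    """Auto-execute high-confidence actions; escalate the rest to a human.

    The 0.9 threshold is a placeholder for illustration only.
    """
    if confidence >= threshold:
        return f"auto-execute: {action}"
    return f"escalate to human review: {action} (confidence {confidence:.2f})"

print(route_decision("apply recommended discount", 0.95))
print(route_decision("apply recommended discount", 0.62))
```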

The rest of Wu’s steps provide a roadmap for AI adoption. “Together with AI, I believe we can tackle the biggest challenges facing humanity,” he states. 

Accelerating Adoption After Initial Success  

Organizations that have started out on their road map to AI and have experienced some success wonder how the process can be accelerated. During the pandemic in 2020, companies put AI through some paces.   

David Tareen, director of AI and analytics, SAS

“The pandemic put AI and chatbots in place to answer a flood of pandemic-related questions,” stated David Tareen, director of AI and analytics at SAS, in an account from The Enterprisers Project. “Computer vision supported social distancing efforts. Machine learning models have become indispensable for modeling the effects of the reopening process.”  

With these initial successes, companies begin to see the way to higher potential value to the business from more AI projects.  “If there’s one reason IT leaders should accelerate the broader adoption of AI, it’s the ability to uncover opportunities that generate real business value through insights and efficiencies where perhaps there were none,” stated Josh Perkins, field CTO at AHEAD, which is a Gold Cloud partner with Microsoft Azure.   

One suggestion is to identify the best use cases and begin with those. “AI and machine learning efforts are best directed at specific use cases, and it may require engaging a broader ecosystem to bring it to life, especially if you have a paucity of AI and ML talent,” suggests Peter A. High, president of Metis Strategy, business and technology consultants.   

Another suggestion is to manage to milestones. “One overlooked challenge with AI initiatives is the time commitment required before tangible results can be delivered,” stated Ravi Rajan, head of data science at Cowbell Cyber, a cyber insurance company. “Without clear goals and planned milestones to show progress, AI projects can rapidly turn into discovery.”  

Regarding staffing for AI, experts suggest a multi-pronged approach to skills acquisition. Experts in big data, process automation, cybersecurity, human-machine interaction design, robotics engineering, and machine learning are in demand. Finding needed expertise has become a creative endeavor calling for innovative approaches.   

“In addition to having sophisticated hiring and retention plans, organizations need to work harder to leverage the talent they already have,” states Ben Pring, VP and director of the Cognizant Center for the Future of Work.  “A root-and-branch reform of upskilling and internal career progression is an important element of the multi-factor HR strategy necessary to succeed at this foundational task.”  

Pring is a coauthor of the book, What To Do When Machines Do Everything (2017), and the founder of Cognizant’s Center in 2011. He had worked at Gartner for over 14 years previously.  

Oak Ridge National Lab Worked with Partners to Speed Adoption 

In nuclear energy, engineers at the Oak Ridge National Laboratory (ORNL) found a way to accelerate AI adoption by working with partners.   

“Industry turns to ORNL for scientific and engineering expertise and world-class facilities that can’t easily be replicated,” stated Kathy McCarthy, Associate Laboratory Director of the Fusion and Fission Energy and Science Directorate, in a press release. “Here our researchers share some of the impacts and success of their current industry partnerships.”  

Collaboration led this spring to the production of four fuel assembly brackets by 3-D printers; the brackets have been successfully installed and are now operating at the Tennessee Valley Authority’s Browns Ferry Nuclear Plant Unit 2 in Athens, Alabama. The components are expected to remain in the reactor for six years, with routine inspections.  

Produced at ORNL’s Manufacturing Demonstration Facility, they were developed in collaboration with Framatome, TVA, and the DOE Office of Nuclear Energy–funded Transformational Challenge Reactor (TCR) program based at ORNL.   

“It took all three vantage points—industry manufacturer, electric utility, and national laboratory—to make this milestone possible,” stated Ben Betzler, TCR program director. “Through this collaboration, we’ve shown that it is indeed possible to deliver a 3D-printed component qualified to operate in what is one of the nation’s most highly regulated industries.”  

Researchers in the TCR program are leading 13 projects selected for DOE’s INFUSE (Innovation Network for Fusion Energy) initiative. 

Read the source articles and information from ZDNet, from The Enterprisers Project and in a press release from the Oak Ridge National Laboratory. 

Funny AI: Review of AI in Cartoons and Comic Strips 

By John P. Desmond, AI Trends Editor  

Inevitably, AI has become a ripe subject for cartoons and comic strips. Cartoons are sometimes used to teach new concepts or present difficult material, as a way to ease into it. Cartoon comics designed to make people laugh often tap into fears that AI will be taking over for humans, rendering humans useless. No one wants to be useless.  

Here is a review of some AI cartoons.  

Scott Adams is the creator of Dilbert, the office-worker character who is an IT guy. The strip captures mismanagement and workplace inefficiency, bringing it to life with a cast of characters who tend to be lazy while holding onto their middle-management positions.   

In one example, Dilbert’s boss says, “I want you to create artificial intelligence that is as smart as me.” Dilbert says, “Okay, I should have that by lunchtime.” His boss says, “Because you’re a fast worker?” Dilbert says, “Sure.” 

Describing why something is funny is not funny, but here goes: it’s funny because Dilbert is making fun of his boss. AI doesn’t have much to do with it, except that the less-than-brilliant boss is easily fooled about the subject.   

Timo Elliott describes himself on his website as a “passionate advocate of AI,” as well as a cartoonist, writer, and for his day job, a VP and Global Innovation Evangelist for SAP in Paris, where he has worked for over 14 years, according to LinkedIn. Elliott offers cartoons from his website for anyone to use, as long as you write to him and let him know how.   

One of Elliott’s cartoons shows two bald businessmen in suits, going back and forth on some issue, with a younger guy with a laptop at the table, who says, “When you two have finished arguing your opinions, I actually have data!”  

This speaks to the notion that it’s the younger generations who have a leg up on what AI is and how to apply it to the business. The older folks would do well to encourage them and listen.  

Another cartoon from Elliott shows four robots sitting on a bench, while a job candidate talks to a lady receptionist, who says, “Actually, yes, we did let an algorithm choose the shortlist of candidates… Why do you ask?”  

This gets at several issues: the problem of bias in data producing essentially a self-selected sample, the challenges of injecting AI into applications that evaluate candidates for hire, and the fear of robots taking over and replacing humans.  

KDnuggets on Ethics and Self-Driving Cars  

Gregory Piatetsky-Shapiro, president and cofounder, KDnuggets

The founder of KDnuggets, the website focused on technical topics including AI, analytics, big data and data science, also publishes some cartoons. In a recent post, Gregory Piatetsky-Shapiro, the president of KDnuggets, published a cartoon on the challenges around programming ethics into self-driving cars.  

In the classroom, one of the cars asks the professor, “So what should we do? Drive straight and run over a baby or swerve and hit two pedestrians?” Then the car adds a follow-up question, “What if one of the pedestrians is your mother-in-law?” 

Mother-in-law jokes are dated, old as the hills. We know that. This one does present a real problem for programmers of self-driving car routines, on how to handle an ethical dilemma in a real-world driving scenario. It’s not only an ethical dilemma, it’s a super technical challenge. The AI would need to detect that a baby is at risk, then it would have to factor in which life was more valuable: the baby’s, who is just starting out, or the older folks’, who have been around for a while. That’s some challenging coding to deal with, Level 5 stuff, which our AI Trends Insider columnist Lance Eliot has written about many times in AI Trends.     

Here is another one from KDnuggets: The hiring lady is saying to the job candidate sitting in front of her, “I’m a little surprised. With such extensive experience in predictive analysis, you should’ve known we wouldn’t hire you.”   

One site, Artificial Intelligence Comic Strips, put out by The Cartoonist Group, gathers cartoons from over 18 newspaper comic strips.  

In an episode of Speed Bump by Dave Coverly, two robots are sitting on a couch across from a couple, and everyone has a drink. One robot says, “Do you have to use finger quotes every time you mention my artificial ‘intelligence?’” That gets at the fear that robots have become so human-like, we have to worry about offending them.   

This AI Generates the Cartoon AI 

Now, inevitably as well, we have a category of cartoons generated with the help of AI on an AI subject.  

Ilan Manouach, Cofounder and CEO, Applied Memetic

Comics artist Ilan Manouach and AI engineer Ioannis Siglidis are behind the Neural Yorker published in The New Yorker magazine. Manouach is an artist, researcher, strategy consultant and founder of Applied Memetic, based in Brussels. The firm is described as a synthetic media company using AI to understand and expand human creativity.   

The two developed an AI engine that posts computer-generated cartoons on Twitter, according to an account in Hyperallergic. The image-and-caption combinations are produced by a generative adversarial network (GAN), a deep-learning-based model. The network is trained using a database of punchlines and images of cartoons found online. It then “learns” to create new gags in the New Yorker‘s iconic style, with sometimes hilarious and unsettling results.  
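
For readers curious about the mechanics, the adversarial training loop behind any GAN can be sketched in miniature, as below on toy one-dimensional data; this is emphatically not the Neural Yorker’s code, which pairs scraped cartoon images with caption text, but it shows the generator-versus-discriminator tug-of-war the article describes.

```python
import torch
import torch.nn as nn

# Deliberately tiny GAN on toy 1-D "data", only to illustrate the adversarial
# training loop. The real Neural Yorker model works on images and captions.
torch.manual_seed(0)

G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_data = torch.randn(256, 1) * 0.5 + 3.0   # "real" samples centered near 3

for step in range(500):
    # Train the discriminator to tell real samples from generated ones.
    noise = torch.randn(64, 4)
    fake = G(noise).detach()
    real = real_data[torch.randint(0, 256, (64,))]
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    noise = torch.randn(64, 4)
    g_loss = bce(D(G(noise)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(8, 4)).mean().item())   # generated samples drift toward ~3.0
```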

Here is one example: “I think we ought to talk about something very substantive,” says what looks like sperm in the process of fertilizing an egg.  

Another example: “Please Harry, no I cannot use your name,” says a woman to what looks like a desktop computer. One observation: because the cartoons are drawn by the AI, they are not always so easy to understand or interpret.  

Manouach and Siglidis stated to Hyperallergic, “The cartoon format thrives on quirkiness, absurdity, arbitrariness, and cheap artifice in order to get their simple message through.”  

“Cartooning is paradoxically a 21st-century art form catering to a readership with limited attention for a quick visual gratification fix,” they stated. “The Neural Yorker explores the limits of an important feature in the history and the modes of address of cartoon making: the non sequitur.”  

A non sequitur is a conclusion that does not follow or something not related to anything previously said. The AI has an advantage in that it essentially doesn’t know what it’s doing, all by itself. The Hyperallergic account described the strip as “highlighting the subjectivity of humor.”  

Manouach is currently co-editing a glossary on artificial intelligence and working on Fastwalkers, a manga comic book written with emergent AI. Siglidis is starting a PhD program on Deep Learning and Computer Vision at the École des Ponts ParisTech. 

Read the source articles and information at Dilbert, on the website of Timo Elliott, at KDnuggets, on the AI Trends website of Lance Eliot and in Hyperallergic.