DeFi: What it is, dApps, DEXs and Top 10 DeFi Exchanges

Please read our disclaimer on investment related topics before proceeding. One idea behind blockchain technology is decentralization and making applications run without having the need for central bodies. In the traditional…

The post DeFi: What it is, dApps, DEXs and Top 10 DeFi Exchanges first appeared on AIMultiple: High Tech Use Cases & Tools to Grow Your Business.

10 Most Innovative Countries: A data-driven analysis

Local solutions are still important. Local languages, customs, and laws increase the need for local partnerships. We have listed B2B high-tech solution providers by country so that you can find…


Top 6 Applications of Machine Learning (ML) in Process Mining

The global process analytics market size is reported as $185 million in 2018. For more than a decade, machine learning has been applied to traditional process mining. Some of these…


Top 5 Outsourcing Best Practices for Organizations

It was predicted that IT outsourcing market will grow by $98 billion between 2020-2024 and 40% of small businesses outsource a business process. However, 20-25% of outsourced projects fail within…


Process Mining in Healthcare: An Ultimate Guide

The global process analytics market size is expected to reach ~$1.42B, and process mining has been applied successfully in various domains, such as banking, insurance, logistics, production, e-government, customer relationship…


How Electric Vehicle Manufacturers Employ AI Strategically 

By AI Trends Staff 

The auto industry is transforming into a value network, driven by the technological development of electric, connected, autonomous, and shared (ECAS) vehicle architectures, systems intelligence, new computing paradigms at the edge, and swarm capabilities in vehicle domains. 

That is from a report in the journal Frontiers in Future Transportation, published by Frontiers, a company exploring frontiers of innovation in future transportation systems, with offices in Lausanne, Switzerland.  

AI is important to this evolution. “The safety and transparency of AI-based technology and the explainability of the purpose, use, benefits, and limitations of AI systems are critical for fulfilling trustworthiness requirements,” the report states. 

The primary manufacturers of electric vehicles have a range of approaches toward incorporating AI.   

The leading electric vehicle manufacturer is Tesla with a 15% market share, followed by Volkswagen Group with 13% and SGMW with 11% (SGMW is a joint venture of SAIC Motor, General Motors, and Liuzhou Wuling Motors of China), according to a report from market analyst firm Canalys cited in The Washington Post. 

China Now the World’s Largest Automotive Market  

Tesla benefits from strong sales in China, the US, and Europe, based on the number of vehicles sold. Chinese manufacturers are beginning to be more competitive, especially inside China, the world’s largest automotive market, according to Canalys.   

SGMW, the joint venture between General Motors and two Chinese automakers, has rapidly gained market share owing to booming sales in China of a tiny electric car with a top speed of 62 miles per hour. The Hong Guang Mini EV hit the market last year at a starting price of $4,500, quickly becoming a top seller to budget-conscious Chinese shoppers. 

The Chinese brands BYD, Great Wall Motors, and Nio also sell popular EVs inside China, according to data from Credit Suisse. 

The Chinese government has spent at least $60 billion to support its electric-car industry, including research-and-development funding, tax exemptions and financing for battery-charging stations, according to the Center for Strategic and International Studies, a Washington think tank. Chinese brands so far have not made much progress outside China. 

As of May this year, the Chevrolet Bolt ranked third among electric vehicles in the United States, behind the Tesla models Y and 3, according to Credit Suisse. The Ford Mustang Mach-E ranked fourth, according to Canalys.  

AI Strategic to Battery Maintenance, Innovation  

The arrival of AI has disrupted the electric vehicle ecosystem, especially in the area of battery development, according to an account in Analytics India Magazine. 

AI is especially important to the maintenance and monitoring of EV batteries, which vary widely in their ranges. The Tesla Model S 100D can run for 355 miles; the Hyundai Kona can run for 198 miles, and the MG ZS EV can run for 214 miles, according to the account. Charging times also vary. A Tesla can take 75 minutes for a full charge, for example. 


“Batteries are highly complex components—chemically, electrically, and in terms of software,” stated Rikard Vinge, data scientist at Volvo. “But data analytics can help us to understand them better: how they are used, how they are affected by the driver, how they are impacted by other components in the vehicle.”  

Machine learning models help with battery life cycle management. By blending advanced electronics with IoT, data science, and digital twins, ML models with predictive intelligence can anticipate battery life and identify degradation, breakdowns, and their causes. Akhil Aryan, the cofounder of ION Energy, a company that applies intelligent battery analytics to improve the performance of lithium-ion batteries, says data on battery life includes performance, state of charge, stress from rapid acceleration and deceleration, temperature, and the number of charge cycles.   
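As an illustration of the kind of predictive model described above, here is a minimal sketch that fits a linear degradation model to hypothetical battery telemetry. The numbers, column choices, and function names are invented for the example and are not drawn from ION Energy or any vendor:

```python
import numpy as np

# Hypothetical telemetry: charge cycles, average pack temperature (C),
# and measured remaining capacity (% of original). Illustrative numbers only.
cycles = np.array([100, 300, 500, 700, 900, 1100], dtype=float)
avg_temp = np.array([25, 27, 30, 28, 32, 31], dtype=float)
capacity = np.array([99.0, 96.5, 93.8, 91.9, 88.7, 86.5])

# Fit a linear degradation model: capacity ~ b0 + b1*cycles + b2*temp
X = np.column_stack([np.ones_like(cycles), cycles, avg_temp])
coef, *_ = np.linalg.lstsq(X, capacity, rcond=None)

def predict_capacity(n_cycles: float, temp: float) -> float:
    """Estimate remaining capacity (%) after n_cycles at a given avg temp."""
    return float(coef @ np.array([1.0, n_cycles, temp]))

def cycles_until(threshold: float, temp: float) -> float:
    """Solve the fitted model for the cycle count at which capacity drops
    to `threshold` percent -- a rough battery end-of-life estimate."""
    # threshold = b0 + b1*c + b2*temp  ->  c = (threshold - b0 - b2*temp) / b1
    return float((threshold - coef[0] - coef[2] * temp) / coef[1])
```

Real battery analytics platforms use far richer features (state of charge, acceleration stress, charge-rate history) and nonlinear models, but the life-cycle question they answer has this same shape: given usage so far, when does capacity cross an end-of-life threshold?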

Many collaborations are happening around AI for EVs. Google is collaborating with Ford, which aims to use Google’s AI, data science and analytical capabilities to improve customer experiences, accelerate the modernization of product development, manufacturing and supply chain management, and fast track the implementation of data-driven business models. 

Tesla leverages machine learning and AI in its driving system and over-the-air update capability. Volkswagen and Microsoft are planning to use Azure IoT and Power BI to offer customers telematics and productivity solutions.  

At Tesla’s AI Day, Work on Computer Vision System Described 

At its recent AI Day, Tesla’s head of AI Andrej Karpathy outlined work going on to improve the car’s computer vision. The two main problems that Tesla is working on within its computer vision architecture, he indicated in an account in TechCrunch, are temporary occlusions (such as cars at busy intersections that block Autopilot’s view of the road beyond) and signs or markings that appear some distance before the point where lanes merge.  

To help solve these problems, Karpathy described work Tesla engineers are doing on a spatial recurrent network, wherein different parts of the module keep track of different aspects of the road, forming a space-based and time-based queue. That creates a cache of data that the model can refer back to when trying to make predictions about the road.  
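Tesla has not published this architecture in code, but the caching idea can be sketched in a toy form: a spatial grid of road beliefs is updated recurrently, and cells that are currently occluded simply retain their last estimate. Everything below (grid shape, blend factor, values) is illustrative only and is not Tesla's implementation:

```python
import numpy as np

def update_spatial_memory(memory, observation, visible, blend=0.5):
    """One recurrent step over a 2-D grid of road features.

    memory:      (H, W) current belief about the road (e.g., lane-marking score)
    observation: (H, W) this frame's camera-derived estimate
    visible:     (H, W) boolean mask; False where the view is occluded

    Visible cells blend the new observation into memory; occluded cells
    keep the previous belief, acting as a short-term cache.
    """
    blended = (1 - blend) * memory + blend * observation
    return np.where(visible, blended, memory)

# Toy example: a 1x4 strip of road; cell 2 is occluded by a passing car.
memory = np.array([[0.0, 0.0, 0.8, 0.0]])        # we previously saw a marking at cell 2
obs = np.array([[1.0, 1.0, 0.0, 1.0]])           # current frame sees nothing at cell 2
visible = np.array([[True, True, False, True]])

memory = update_spatial_memory(memory, obs, visible)
# The occluded cell keeps its cached value instead of being overwritten.
```

The point the sketch makes is the one Karpathy described: a purely per-frame vision system would "forget" the road behind an occluding car, while a recurrent spatial memory carries that information forward in time.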

The AI team’s work also includes a simulation, creating “a video game with Autopilot as the player,” Karpathy stated, to help the car make the right decisions.  

Volkswagen Opens AI Detroit Unit to Tap Local Automotive Expertise 

In the US Motor City of Detroit, Volkswagen of North America this year opened its AI Detroit unit, a dedicated AI research and development center. 

“We want to leverage new opportunities in applied Artificial Intelligence to further improve products and services for our customers, supporting our employees and become even more efficient as a company,” stated Johan de Nysschen, chief operating officer at Volkswagen Group of America, in an account in Automotive World.  

Abdallah Shanti, chief information officer for the Volkswagen Passenger Cars brand, stated, “Volkswagen has a tremendous treasure of data. Competencies in Artificial Intelligence will help us leverage this treasure, and in supporting people and business.” 


Daniel Weimer, head of AI Detroit, leads a team of AI scientists and software engineers who seek to apply the latest AI and machine learning breakthroughs to the automotive business. “We want to bring AI technology to business and create real impact,” Weimer stated.  

The plan is for Weimer and his team to monitor the latest developments in AI, from new techniques in machine learning to new software, then work across Volkswagen Group of America’s different brands—from Volkswagen to Audi and Bentley—to find ways to implement those insights. 

“We’re in Detroit for a reason. You can knock on every door here and there’ll be someone with a high automotive IQ,” Weimer stated. “There’s all this awesome infrastructure and talent here, along with a great attitude of wanting to redefine the industry. There’s no other place where you find that.”  

Read the source articles and information in the journal Frontiers in Future Transportation, in The Washington Post, from the Center for Strategic and International Studies, in Analytics India Magazine, and in Automotive World. 

The Infamous Trolley Problem At Large-Scale Sideswipes AI Autonomous Cars 

By Lance Eliot, the AI Trends Insider 

Think about all of those moment-to-moment rapid decisions that you make while driving a car.   

Go ahead, do a slow-motion post-driving analysis in your mind. Think about a recent trip to the grocery store or perhaps a driving trek to a local mall.   

Whether you realize it or not, there were hundreds upon hundreds or likely thousands of minuscule driving decisions that you made, all part of a larger web of driving decisions in the course of a driving journey. And, notably, they all ultimately encompassed some variant of life-or-death considerations.   

How so?   

Imagine that you are driving along amid city streets on an otherwise ordinary day.   

If you decide to take that upcoming right turn just a bit fast, there is a heightened risk that you could inadvertently go awry. You might veer into a pedestrian that is standing out at the curb edge. You might swing wide and brush against another car that is in a nearby lane.   

Bam, you smack into someone.   

In case you are doubtful that these are decidedly life-or-death-related decisions, I have some somber stats to share with you.   

There are about 6.7 million car crashes each year in the United States alone. Those car crashes produce approximately 2.5 million injuries and over 40,000 human fatalities. Few of us are contemplating those ominous stats when we get behind the wheel of a car. Nonetheless, they are quite illuminating numbers and worthy of due pause whenever undertaking a driving trip.   

Now that you are thinking about all of those itsy-bitsy decisions and how vital they are, let’s zoom to a more macroscopic perspective.   

You are one person that is driving on a city street. We zoom out and there are lots of other cars also driving on that same city street. We continue to zoom out and can see lots and lots of cars throughout the entire city, all being driven by a human at the wheel. Keep zooming out, and you’ll see cars being driven throughout an entire geographical region, and then throughout the entire United States.   

There are around 250 million registered automobiles in the United States. Of course, at any singular point in time, not all of them are necessarily underway. We do keep our cars parked and ostensibly stationary for about 95% or more of their available usage time. In any case, when we do use our cars, the aggregate number of miles traveled annually is estimated at 3.2 trillion miles in the United States alone. A typical everyday driver probably drives around 12,000 miles per year, which varies depending upon where you live and what type of work you do. 
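The figures above can be turned into the per-mile rates that traffic-safety analysts typically use. A quick back-of-the-envelope calculation, using only the numbers stated in this article:

```python
# Figures from the article (United States, annual)
crashes_per_year = 6_700_000
fatalities_per_year = 40_000
miles_per_year = 3.2e12       # aggregate vehicle-miles traveled
driver_miles = 12_000         # typical mileage for one everyday driver

# Rates per 100 million vehicle-miles, a standard safety unit
fatalities_per_100m_miles = fatalities_per_year / miles_per_year * 1e8   # 1.25
crashes_per_100m_miles = crashes_per_year / miles_per_year * 1e8         # ~209

# A typical driver's rough annual chance of being in some crash
annual_crash_odds = crashes_per_year / miles_per_year * driver_miles     # ~0.025
```

In other words, the article's figures imply roughly 1.25 fatalities and about 209 crashes per 100 million miles driven, and a typical driver faces on the order of a 2-3% chance of some crash in a given year: each tiny decision is cheap, but the aggregate stakes are real.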

What does this driving-related zoom-in to zoom-out visioning get us?   

The choices that we make as drivers are a kind of collective conglomeration. By adding up all those seemingly minuscule acts, we can begin to detect patterns of widespread behaviors. Those patterns can reveal that the day-to-day moment-to-moment driving decisions might overall be increasing our risks while driving a car, or can possibly be reducing our risks, contingent on what choices are being made. 

Let’s use an example to go from the considered microscopic to the macroscopic.   

You are once again at the wheel of a car and driving along in a city that has the usual hustle and bustle taking place. There are other cars around you. Pedestrians are on the sidewalks, and some are jaywalking across the busy streets. Bike riders are amidst all this chaos. The usual zoo of wild and crazy traffic endangerment exists. 

Just another day in paradise, as they say.   

Being late for getting to work, you are driving with a bit of a rush. This does not imply that you are recklessly driving. It is just that you are driving with a certain amount of verve and zest, hoping to shave some time off your daily commute to the office.    

At one point during your driving trek, you are in a major artery of the city that has two lanes going southbound, plus two other lanes going northbound. For the moment, you are in the rightmost lane of those two northbound lanes.   

Traffic is running along smoothly, but busily so. Speeds are around 45 miles per hour. In addition to the lanes of car traffic, there is a bike lane on your side of the street. The bike lane is to your right. Bike riders are actively using that bike lane. The time of day is around rush hour, and thus many cars and trucks are flowing along in traffic. Plus, there are lots of bike riders occupying the bike lane.   

Hopefully, you’ve got a pretty good picture in your mind of the driving setting.   

Suddenly, you realize that a large truck, one of those massive moving vans, has come up beside your car, doing so in the northbound left lane. The truck is blocking your view of the southbound lanes, but that’s okay, since you don’t need to see the traffic that is heading in the opposite direction of your travel. 

In a sense, you are now trapped in your right lane. To your left is this big lurking truck. There is traffic directly ahead of you in your right lane. There is traffic behind you in your right lane. You have your own little pocket, as it were, sitting tightly squeezed between all those other interlopers.   

And there is the bike lane to be considered too.   

Please keep in mind that you are flowing along at around 45 miles per hour. Your vehicle is in motion. All the other cars and the truck next to you are also in motion. Turns out that there are bike riders in the bike lane, and they too are in motion.   

Here’s what comes next. 

The moving van is veering somewhat in its lane. The driver of the truck is having a hard time keeping the lengthy vehicle in its own lane. This is a large, unwieldy truck and the roadway is not the best. The lanes are slightly askew, and the roadway surface is rather rough.  

Meanwhile, some of the bike riders decide they are being held back by some slower bike riders ahead of them. As such, several of these bikers are opting to pass the slower ones. The act of passing them, though, is going to bunch them all up for a few split seconds, essentially completely occupying the entire width of the bike lane. Indeed, the odds are that some of the bike riders are going to slop over into the regular lane of traffic as they make their dicey and rapid movement around the slowpoke bike riders.   

Are you ready to make a snap-driving decision? The everyday squeeze play has become the ultra-squeeze play. 

You can try to stay squarely in the middle of your lane.   

In that case, you are hoping that there is sufficient clearance for the swaying truck on your left and enough clearance for those bikes that are bulging out of the bike lane and into your lane of traffic from your right. 

Or you might decide to ride toward the edge of your lane. Which edge? Do you choose to align with the right edge or the left edge of your lane?   

Choosing the right edge would keep you farther away from the truck, at least as much as is feasible at this juncture. That truck is an imposing figure and rather foreboding. It is extremely bulky and heavy, such that a glancing blow from the truck into your moving car is bound to be grievously problematic, possibly even fatal. You might be wise to give the wayward truck the most clearance that you can.   

But whoa, don’t forget about those bike riders!   

The bike riders are extremely vulnerable. The odds are that if you brushed against a bike rider, it would be really bad times for that person. In fact, by striking one of them, there is a chance that several of the bike riders would go down to the ground all at the same time, akin to a bowling ball striking bowling pins. Those battered bike riders would undoubtedly get hurt, and there is a chance of fatalities depending upon how things go.   

You are now in the proverbial situation of being between a rock and a hard place.   

On one side, you are trying to stay afield of the menacing truck. This is probably safer for you; a collision with that truck is likely to be lethal to you. But you need to stay afield of those bike riders, too. There is little doubt that colliding with the bike riders would probably not be especially injurious to you. Sadly, it would most likely be severely injurious if not fatal to them.   

Choices, choices, choices.   

Some might assert that it is up to fate to decide. Just stay in the middle of your lane. Whatever happens is going to happen. Don’t sweat it. You are rightfully able to be in your lane, and by staying in the middle you will always have a clear conscience about what you did.   

Maybe so, maybe not.   

Suppose you do remain in the middle. It could be that the truck marginally veers into your lane and rams into your car. Had you been just a tad toward the right edge of your lane, the truck and your car would have never touched each other. Because you opted to doggedly be in the middle of your lane, the collision takes place. Bad news for you.   

The point is that none of the options are risk-free, and nor are any of them outcome-free.   

Whichever choice you make, it could turn out to be the “wrong” choice in terms of having an adverse result. Driving is a game of playing the odds, though by referring to the matter as a “game” we need to realize that this is a life-or-death gambit. It is not a playful game. It is a serious contest, encompassing probabilities and erstwhile chances that can turn an innocent moment into a heart-wrenching and irreversibly unfavorable one.   

Which choice did you make?   

I don’t want you to be thinking that there is a wrong choice or a right choice per se. That’s not what the scenario is meant to convey. The emphasis of this setting is that throughout a driving journey you are called upon to make essential driving decisions. Those driving decisions are vital. You often make them in a split second. They come and go, like a river that flows endlessly.   

Most of the time, those driving decisions are not especially notable. In this case, perhaps you stayed in the middle and everything turned out okay, or maybe you went to the right edge or the left edge, and everything turned out okay. I would dare say that you would not likely remember the next day that you had that precipitous decision to make. 

If you tried to remember all the hundreds or thousands of driving choices in each driving trek, you’d probably go bonkers. It just isn’t a reasonable thing to do. Sure, some of the more monumental ones will probably stick with you. For example, this particular scenario could stay in your mind for a long time, especially if it was a real squeaker and the situation flared to a point of nearly having gotten struck or nearly having struck a bike rider.   

Assuming that no one got hurt and there was no collision of any kind, the chances are that this instance would eventually recede in your memory banks. You might recall it from time to time, particularly if telling tall tales about some of your harrowing driving experiences. 

You aren’t the only one to have ever been in a situation like this. Being pinned between the wayward truck and those boisterous bike riders is undoubtedly something that happens with some frequency. It happens each day. It happens in your city, and it happens in many other locales.   

Consider that daily, zillions of drivers are making the same decision that you were just confronted with. This happens throughout the day. It happens over a period of a year, over and over again. We could end up with zillions upon zillions of those specific decisions being made.   

Here is the kicker. 

If those drivers all made independent decisions, we would expect that there might be an equal chance of staying in the middle versus going toward the right edge or the left edge. We could look at a statistical distribution and see that in the aggregate, there was an equal chance of which way drivers were opting to go.   

On the other hand, drivers might have a propensity or specific tendency that would become apparent by examining the aggregated instances. Suppose the numbers showed that, by and large, drivers went to the right edge. Overall, they seemed to be choosing to get away from the truck, though this simultaneously increased the risks of hitting the bike riders.   
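If such aggregate counts could ever be collected, detecting a shared propensity would be straightforward statistics. A minimal sketch, using invented counts and a hand-rolled chi-square test against a uniform (no-propensity) expectation:

```python
def chi_square_uniform(counts):
    """Chi-square statistic of observed choice counts against a uniform
    expectation, i.e., equal preference for each option."""
    total = sum(counts)
    expected = total / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

# Hypothetical aggregated counts of the squeeze-play decision:
# [left edge, middle, right edge] across many drivers. Invented numbers.
balanced = [3300, 3400, 3300]   # no strong propensity
skewed = [1500, 2500, 6000]     # most drivers shade toward the right edge

stat_balanced = chi_square_uniform(balanced)   # small
stat_skewed = chi_square_uniform(skewed)       # large

# With 2 degrees of freedom, a statistic far above ~6 suggests the choices
# are not uniform -- that is, a shared propensity exists in the aggregate.
```

A real analysis would also condition on context (road type, traffic density, vehicle mix), but the core move is the same: individual split-second choices become detectable patterns only once they are counted in bulk.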

In any real-world sense of things, we do not have any practical means to ferret out this kind of decision-making in the aggregate.   

We aren’t able to collect the zillions of daily driving decisions being made by each driver, and we cannot then add those up into a convenient database that would show us all the zillions upon zillions of those itty-bitty driving decisions made by the over 225 million licensed drivers in the United States. 

That’s a darned shame.  

Wait a second: wave a magic wand and pretend that we could collect that humongous dataset.   

This would enable all sorts of insightful analyses on driving and driver behaviors. We might discover that human drivers are more prone to making one type of decision over another, and that this has heightened their risk of incurring a car crash. We could then try to educate drivers accordingly, aiming to change behavior toward being less risk-prone. We might redesign our cars, or at least the driving controls, accordingly. Furthermore, we might alter the roadways and our infrastructure, doing so to minimize the bad choices and maximize the good choices.   

That kind of data collection and analysis could make a big difference in reducing the number of annual injuries and fatalities associated with driving a car. Sorry to say though that we don’t have a magic wand, but we do have something nearly “magical” that is gradually arising, namely the advent of AI-based true self-driving cars.   

Let’s talk about self-driving cars.   

The future of cars consists of AI-based true self-driving cars. There isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle.   

Here’s an intriguing question that is worth pondering: How might driving decisions at the micro-level and also at the macroscopic aggregated level be pertinent to the advent of AI-based true self-driving cars?   

Before we consider this, I’d like to further clarify what is meant when I refer to true self-driving cars.   

For my analysis about why the Trolley problem applies to AI autonomous cars, see the link here: https://www.aitrends.com/ai-insider/the-trolley-problem-undeniably-applies-to-ai-autonomous-cars/   

For my discussion about the ethically ambiguous aspects encompassing the Trolley problem and self-driving cars, see the link here: https://www.aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

Understanding The Levels Of Self-Driving Cars   

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.   

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).   

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.   

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend). 

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).  

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.   

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.   

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/  

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/ 

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars And The Statistical Trolley Dilemma 

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.   

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can. 

Why is this added emphasis about the AI not being sentient? 

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet. 

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car. 

Let’s dive into the myriad of aspects that come to play on this topic.   

Driving a car is tantamount to making ongoing life-or-death decisions, as noted earlier in this discussion.   

The example of being squeezed between a large truck to your left and those bike riders to your right was a simple and yet commonly encountered instance of life-or-death driving choices. The lives involved or on the line can include the driver of the car, the passengers that might be in that car, the drivers of nearby cars, the passengers inside those nearby cars, pedestrians nearby, bike riders nearby, and so on.   

Some would argue that life-or-death driving decisions are extremely rare. 

Their viewpoint is that maybe once in a lifetime as a driver, you might come upon a situation that incorporates life-or-death choices. Otherwise, for some 99.999% of your driving experiences, you won’t presumably have any such grave matters to consider.   

This is an extremely offbeat way to characterize driving a multi-ton vehicle that is able to move at tremendous speeds and convey enormous physical forces. I submit that we indeed are all faced most of the time with life-or-death driving choices. It is perhaps 99.999% of the time that we, fortunately, make the correct or sufficiently apt decisions and avert getting into dire circumstances.   

Despite having managed to elude the grim reaper much of the time, the struggle of confronting imminent life-or-death choices while driving, or at least the possibility of injury versus noninjury, is a constant one and not a rarity.   

There is a famous or some would say infamous mind-bending exercise known as the Trolley Problem that has garnered a great deal of debate and angst in the self-driving car industry and pertains to the weighty decisions involved in driving a car.   

Many pundits and vendors in the self-driving car niche are quick to claim that the Trolley Problem is irrelevant to the advent of self-driving cars. They decry the Trolley Problem as purely theoretical, impractical, and a nonsensical distractor from the realities of driverless cars. 

Those making such an argument are flat-out wrong.   

They either misunderstand how to apply the Trolley Problem to the matter of self-driving cars, or they are wishful that it should not be considered applicable. Their wishful thinking at this time allows them to disregard or downplay the issues raised. This in turn can provide a basis for not encompassing Trolley Problem-related solutions into their AI driving systems.   

I’ve predicted that those that take a head-in-the-sand approach to this topic will find themselves and their companies on the legal hook down the road. Eventually, there are going to be gargantuan lawsuits against many of the automakers and self-driving tech firms over their paucity of Trolley Problem considerations. Only then will they apparently take seriously the Trolley Problem, though it will assuredly be the case that they will still fight tooth-and-nail against it, hoping to avoid mega-sized legal losses.   

As a quick primer about the Trolley Problem, it is a relatively straightforward thought experiment. Imagine that you are standing at a train track and have access to a control that will shunt a trolley onto one of two forking tracks. On one of the forked tracks is a person that is tied down to the rails and cannot get away. On the other forked track, there are three people tied down to the rails and unable to escape (the number of people tied down varies by how the setup is envisioned, sometimes five people are mentioned rather than three, etc.).   

Which direction do you decide to send the oncoming trolley?   

It is a devilish problem. You are either going to choose to kill one person or three people. The inclination by some is that they won’t move the switch at all, thereby avoiding having to make a choice. That’s not really a means to avoid the issue, since the switch is already preset to go onto one track or the other. Your attempt to avoid being involved will nonetheless still produce death.   

I won’t get into all the details. The main point is that you are at times faced with very difficult life-or-death situations, and you need to make the most horrible of choices.   

Similarly, the example of being squeezed between the large truck and the bike riders was a Trolley Problem-related consideration. 

You had to choose which option was less unfavorable. The only notable difference from the Trolley Problem is that death in this real-world setting was not an absolute certainty; there was a probability of death, and also a probability of injuries. Some would contend that this is why the Trolley Problem is irrelevant, since the thought experiment entails only sure death and not a probability of death. That is a rather feeble argument, and there are many variations of the Trolley Problem, including ones that incorporate probabilities rather than absolute certainties.

Now that I’ve got you up-to-speed about the Trolley Problem, we can return to the aspects of the microscopic elements of driving decisions and the aggregated macroscopic perspective. 

As notably articulated in the research paper “The Trolley, The Bull Bar, And Why Engineers Should Care About The Ethics Of Autonomous Cars” (authored by Jean-Francois Bonnefon, Azim Shariff, and Iyad Rahwan, published in the Proceedings of the IEEE), there is a Statistical Trolley Dilemma to be considered, and it ought not to go unheeded: “Alas, ignoring the challenges of autonomous vehicles as explicit ethical agents will only postpone the problem. Even if every action of an autonomous car is oriented toward minimizing the absolute risk of a crash, each action will also shift relative risk from one road user to another. The cars may not be making decisions between outright sacrificing the lives of some to preserve those of others, but they will be making decisions about who is put at marginally more risk of being sacrificed.”

Upon examining the use case of a driver caught between a large truck and a bike rider, along with implied reference to all sorts of akin driving situations, they saliently point out that: “These are not the dramatic, life and death decisions featured in trolley dilemmas. But once they are aggregated over millions of cars driving billions of miles, these small statistical decisions add up to life and death consequences—and prompt the same questions as the trolley dilemma did.”   
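The Statistical Trolley Dilemma can be made concrete with a few lines of arithmetic. The probabilities below are invented purely for illustration; the point is only that a per-trip risk shift far too small to notice becomes material once aggregated across a fleet:

```python
# Illustrative "statistical trolley" arithmetic. A lane-position choice
# shifts a sliver of crash risk from one road user to another; the shift
# is invisible per trip, but not once aggregated over a fleet.
# All probabilities below are invented for illustration only.

def expected_harm(p_truck_crash: float, p_bike_crash: float) -> float:
    """Expected harm of one maneuver, treating each crash as one harm unit."""
    return p_truck_crash + p_bike_crash

# Maneuver A hugs the truck side; maneuver B drifts toward the bike riders.
harm_a = expected_harm(p_truck_crash=2e-7, p_bike_crash=1e-7)  # ~3e-7
harm_b = expected_harm(p_truck_crash=1e-7, p_bike_crash=3e-7)  # ~4e-7

per_trip_shift = harm_b - harm_a  # ~1e-7: negligible for any single trip

# Over a billion such encounters across a fleet, the shift is tangible.
encounters = 1_000_000_000
aggregate_shift = per_trip_shift * encounters  # ~100 expected-harm units

print(f"per trip: {per_trip_shift:.1e}, aggregated: {aggregate_shift:.0f}")
```

A one-in-ten-million difference per encounter sums to roughly one hundred expected-harm units across a billion encounters, which is exactly the aggregation point the Bonnefon, Shariff, and Rahwan paper makes.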

In a manner of speaking, I would assert that the Trolley Problem does apply to the microscopic, moment-to-moment life-or-death driving decisions that we make continually while at the steering wheel, and furthermore that we need to recognize the Statistical Trolley Dilemma at the macroscopic scale too.

This latter aspect is assuredly the accumulation of zillions upon zillions of those day-to-day Trolley Problem instances that add up over time and constitute large-scale aggregated patterns of driving behaviors.   

For more details about operational design domains (ODDs), see my discussion at this link: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/

On the topic of off-road self-driving cars, here are my details: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion 

Trying to get human drivers to be contemplative about the Trolley Problem and adjust their driving behavior is a nearly futile, dreamy notion. Humans tend to be resistant to change and have a hard time adopting new driving practices.   

For semi-autonomous vehicles, such as those at Level 2 and Level 3, there is an opportunity to have the assisted driving features aid a human driver by incorporating Trolley Problem-related solving capabilities. The automation providing driving assistance could seek to alert human drivers, possibly even taking over the driving controls. I’ve covered this extensively as a dicey proposition that can lead to a tug-of-war between a human driver and the assisting automation. It is going to be ugly and assuredly a can of worms.

AI-based true self-driving cars are a different matter since there is no human driver at the wheel. This means that we can expect that the AI driving system will be considering Trolley Problem issues. When a car crash or collision occurs involving a self-driving car, we ought to be able to do a full analysis of what the AI driving system was doing and what decisions it made during the incident at hand.   

On a large-scale basis, we could accumulate these AI driving system records into a centralized database that could be used to study what seems to be working well and what is not. Some have suggested that we might need to establish an ethics-oriented oversight board that would examine the programming of AI driving systems and the degree to which the Trolley Problem is being addressed (see my column coverage).

Here’s a final remark to give you something significant to ponder. 

If we could essentially force all self-driving cars to abide by some set of driving rules, such that in the case of being squeezed between say a large truck and a bunch of bike riders, the AI driving system would do as it has been collectively established to do, what would we want that rule to be?  

That’s the kind of decision that some people would skittishly say is above their pay grade. Maybe so, but it certainly ought not to be left to chance via some semi-randomly choosing AI algorithm, or to rest in the hands of some overworked AI programmer who, while in the throes of coding up the AI, made a life-or-death decision about how the AI driving system is going to react.

Those are decidedly scary defaults. I doubt that any of us would want our lives teetering in the balance on such loosely or poorly determined, altogether careless and mindless proclivities.

I wouldn’t and neither should you. 

Copyright 2021 Dr. Lance Eliot  

http://ai-selfdriving-cars.libsyn.com/website 

Assess AI Risk to Prepare for Coming AI Regulations  

By John P. Desmond, AI Trends Editor 

Since the European Commission in April proposed rules and a legal framework in its Artificial Intelligence Act (See AI Trends, April 22, 2021), the US Congress and the Biden Administration have followed with a range of proposals that set the direction for AI regulation.  

“The EC has set the tone for upcoming policy debates with this ambitious new proposal,” stated authors of an update on AI regulations from Gibson Dunn, a law firm headquartered in Los Angeles.  

Unlike the comprehensive legal framework proposed by the European Union, regulatory guidelines for AI in the US are being proposed on an agency-by-agency basis. Developments include the US Innovation and Competition Act of 2021, “sweeping bipartisan R&D and science-policy regulation,” as described by Gibson Dunn, which moved rapidly through the Senate.

“While there has been no major shift away from the previous ‘hands off’ regulatory approach at the federal level, we are closely monitoring efforts by the federal government and enforcers such as the FTC to make fairness and transparency central tenets of US AI policy,” the Gibson Dunn update stated.

Many in the AI community are acknowledging the lead role being taken on AI regulation by the European Commission, and many see it as the inevitable path.  

European Commission’s AI Act Seen as “Reasonable”  

Johan den Haan, CTO, Mendix

“Right now, every forward-thinking enterprise in the world is trying to figure out how to use AI to its advantage. They can’t afford to miss the opportunities AI presents. But they also can’t afford to be on the wrong side of the moral equation or to make mistakes that could jeopardize their business or cause harm to others,” stated Johan den Haan, CTO at Mendix, a company offering a model-driven, low-code approach for building AI systems, writing recently in Forbes.  

Downside risks of AI in business include racial, gender, and other biases in AI algorithms. With humans doing the programming, algorithms are subject to human bias, limitations, and bad intentions. State actors have experimented with ways to influence population behaviors and election outcomes. “AI can enable these manipulations at mass scale,” den Haan stated.   

The proposed EU regulations would require high-risk AI systems to follow requirements regarding datasets, documentation, transparency and human oversight. The EU regulations would ban uses counter to fundamental human rights, including subliminal techniques. And they would ban real-time use by law enforcement of biometric identification systems, such as facial recognition, in publicly accessible space, except for specific cases of crime or terrorism.  

“The proposal seems reasonable,” den Haan stated.  At the same time, the EU regulations propose many uses of AI be subject to minimal transparency requirements, including chatbots and emotion recognition systems. He sees the EU’s approach as an attempt to balance protection and incentive. “It leaves plenty of room for innovation while offering critical basic protections,” den Haan stated. “Hopefully, other countries, including the US—which currently has a patchwork of AI regulations only at the local level—will follow the EU’s lead.”  

Left unchecked, many believe, the biases and issues around AI will cause harm and reinforce existing inequities. Gartner estimates that by 2022, 85% of AI projects will give inaccurate results due to bias in data, algorithms, or the humans managing them, according to a recent account from Toolbox.

US AI Regulatory Environment Seen as “Complex and Unpredictable”  

While it might be nice to achieve consensus on the risks of AI and how to address them, that is not happening now in the US, where in addition to proposed regulations from Congress, agency regulators are also making proposals. Five financial regulators in March issued a request for information on how financial institutions are using AI services. Shortly after, the Federal Trade Commission issued guidance on what it considers “unfair” use of AI.  And the US National Institute of Standards and Technology (NIST) has proposed an approach for identifying and managing AI bias.  

Ritu Jyoti, Group Vice President, Worldwide AI and Automation Research, IDC

“These activities and more emerging on the national level, combined with efforts by individual states and cities to introduce their own laws and regulations around AI education, research, development, and use, have created an increasingly complex and unpredictable regulatory climate in the U.S.,” stated Ritu Jyoti, Group Vice President, Worldwide AI and Automation Research at IDC, market intelligence analyst firm.  

Others suggest AI regulation can be a matter of degree, with more regulation required where risk is higher. “What is required is an approach to AI regulation that takes the middle ground,” stated Brandon Loudermilk, director of data science and engineering at Spiceworks Ziff Davis. “Taking a flexible, pragmatic approach to AI regulation helps society safeguard the common good in cases where the risk is greatest, while continuing to support general R&D innovation in AI by avoiding extensive regulatory efforts,” he stated.

Advice from HBR on How to Prepare for Coming AI Regulations  

With the coming regulations beginning to take shape, companies using AI in their businesses can begin to prepare, suggests an account in the Harvard Business Review.

“Three central trends unite nearly all current and proposed laws on AI, which means that there are concrete actions companies can undertake right now to ensure their systems don’t run afoul of any existing and future laws and regulations,” stated Andrew Burt, managing partner of bnh.ai, a boutique law firm focused on AI and analytics, and chief legal officer at Immuta, a company offering universal cloud data access control. 

Specifically, first conduct assessments of AI risks and document how the risks have been minimized or resolved. Regulatory frameworks refer to these types of assessments as “algorithmic impact assessments” or “IA for AI,” the author stated. 

Second, have the risk of the AI be assessed by different technical personnel than those who originally developed it, or hire outside experts to conduct the assessments, to demonstrate that the evaluation is independent.     

“Ensuring that clear processes create independence between the developers and those evaluating the systems for risk is a central component of nearly all new regulatory frameworks on AI,” Burt stated.  

Third, put in place a system to continuously review the AI system, after the assessments and independent reviews have been conducted. “AI systems are brittle and subject to high rates of failure, AI risks inevitably grow and change over time — meaning that AI risks are never fully mitigated in practice at a single point in time,” Burt stated.  
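Burt's three practices (a documented impact assessment, an independent reviewer, and continuous re-review) could be tracked in something as simple as a risk-register record. The sketch below is purely hypothetical; the field names and the 90-day review cadence are illustrative assumptions, not anything prescribed by the regulations discussed above:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of an "algorithmic impact assessment" record that
# tracks the three practices described above: a documented assessment,
# an independent reviewer, and a schedule for continuous re-review.
# Field names and the review cadence are illustrative assumptions.

@dataclass
class AIRiskAssessment:
    system_name: str
    developer_team: str
    risks_identified: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)
    reviewer_team: str = ""
    last_reviewed: date = None
    review_interval_days: int = 90  # continuous-review cadence (assumed)

    def review_is_independent(self) -> bool:
        # Independence: reviewers must differ from the original developers.
        return bool(self.reviewer_team) and self.reviewer_team != self.developer_team

    def review_is_current(self, today: date) -> bool:
        # Continuous review: the last review must fall within the cadence.
        if self.last_reviewed is None:
            return False
        return (today - self.last_reviewed).days <= self.review_interval_days

assessment = AIRiskAssessment(
    system_name="credit-scoring-model",
    developer_team="ml-platform",
    risks_identified=["disparate impact on protected classes"],
    mitigations=["reweighted training data; fairness checks in CI"],
    reviewer_team="model-risk-office",
    last_reviewed=date(2021, 6, 1),
)

print(assessment.review_is_independent())              # True
print(assessment.review_is_current(date(2021, 8, 1)))  # True (61 days <= 90)
```

The design choice worth noting is that independence and recency are computed checks rather than free-text claims, so they can be audited mechanically across every system in the register.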

Do all of that, and your company’s AI systems will be far better positioned as low-risk when the regulations arrive.

Read the source articles and information from Gibson Dunn, in Forbes, from Toolbox, and in the Harvard Business Review.

TinyML Enabling Low-Power Inferencing, Analytics at the Edge 

By John P. Desmond, AI Trends Editor 

Edge computing is booming, with estimates ranging up to $61 billion in value in 2028. While definitions vary, edge computing is about taking compute power out of the data center and bringing it as close as possible to the device where analytics can run.  

The devices can be standalone IoT sensors, drones, or autonomous vehicles. “There’s one thing in common. Increasingly, data generated at the edge are used to feed applications powered by machine learning models,” stated George Anadiotis, analyst, engineer and founder of Linked Data Orchestration of Berlin, Germany, working on the intersection of technology, media and data, writing in a recent account in ZDnet.  

However, “There’s just one problem: machine learning models were never designed to be deployed at the edge. Not until now, at least. Enter TinyML.”  

A fast-growing field of machine learning technologies and applications, tiny machine learning (TinyML) is broadly defined as including hardware, algorithms and software capable of performing on-device sensor data analytics at extremely low power. This enables a variety of always-on use-cases, targeting battery-operated devices.  

At the inaugural TinyML EMEA Technical Forum held in June, Anadiotis spoke to several founders of TinyML tech.   

Pete Warden Explores TinyML at Google  

Pete Warden, Staff Research Engineer, Google

Pete Warden was the founder and CTO of the startup Jetpac, which built a product to analyze the pixel data of over 140 million photos from Instagram and turn them into in-depth guides for more than 5,000 cities around the world. The company was acquired by Google in 2014, and Warden has been a Google Staff Engineer since then. He had fit a machine learning model into a two-megabyte footprint, at that time the state of the art in efficiency.

Then he learned that some colleagues at Google had a 13-kilobyte machine learning model for a system used to recognize wake words, running always-on on Android devices. That design meant the main CPU was not drawing battery power while waiting for the wake word (‘Hey Google’ in this case).

“That really blew my mind, the fact that you could do something actually really useful in that smaller model,” Warden said to Anadiotis. “And it really got me thinking about all the other applications that might be possible if we can run, especially all these new machine learning, deep learning approaches.”  

Evgeni Gousev came to the US from Russia more than 25 years ago, intending for it to be a short visit. He is still in the US and is today a senior director at Qualcomm, where he has worked for 16 years. Before that, he was a staff member at IBM’s TJ Watson Research Center, and a professor at Rutgers University. He is a PhD graduate of the Moscow Engineering Physics Institute. 

Gousev met Warden in 2018, and learned what was possible using techniques Warden was exploring. Gousev stated, “Peter showed that you can run a deep learning model on 8 bits, without compromising the accuracy much. It was an eye-opening experience for the whole community. It was kind of a mindset shift.”   
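The “8 bits” Gousev refers to is weight quantization. As a rough illustration of the idea, linear quantization maps each float32 weight onto a signed 8-bit integer plus one shared scale factor, cutting storage by 4x at the cost of a small rounding error. This is a pure-Python simulation of the concept, not how TensorFlow Lite or any other framework actually implements it:

```python
# Minimal sketch of linear 8-bit quantization, the kind of technique that
# lets a float model run "on 8 bits" without compromising accuracy much.
# Pure-Python simulation for illustration; real frameworks calibrate
# per-tensor (or per-channel) scales and use zero-points as well.

def quantize(weights, num_bits=8):
    """Map float weights onto signed integers in [-127, 127] plus a scale."""
    qmax = 2 ** (num_bits - 1) - 1                # 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax   # one shared scale factor
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integers."""
    return [qi * scale for qi in q]

weights = [0.82, -1.27, 0.003, 0.5, -0.75]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Each weight now needs 1 byte instead of 4 (float32): a 4x size cut.
max_error = max(abs(w - r) for w, r in zip(weights, restored))
print(q)
print(f"scale={scale:.6f}, max round-trip error={max_error:.6f}")
```

The round-trip error is bounded by half the scale factor, which for well-conditioned weight distributions is why accuracy degrades only slightly; it is this kind of shrinkage that makes kilobyte-scale always-on models feasible.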

The two organized a TinyML session on the Google campus, getting strong interest. They set up the TinyML Foundation, and held the first summit in March 2019, seeing active participation by 90 companies.   

IoT Growth Fueling Demand for TinyML, Says ABI Research  

Market analyst firm ABI Research recently projected that between 2021 and 2026, the number of IoT connections will reach 23.6 billion, each one representing an opportunity to leverage AI, machine learning, and TinyML. The analysts project the TinyML market will grow from 15.2 million shipments in 2020 to 2.5 billion in 2030, according to a press release issued by ABI Research.

The proliferation of AI has fueled the growth of IoT analytics. Data collected from IoT devices is used to train ML models, generating new insights. At the same time, specialized AI chipsets running on edge devices have introduced AI to a range of mobile devices, including automobiles, smart home speakers and wireless cameras.  

Lian Jye Su, AI & ML Principal Analyst, ABI Research

“However, these devices are often too underpowered to make use of all the data flowing across them and struggle to support high-computing performance and high-data throughput, causing latency issues, which is a death knell for AI,” stated Lian Jye Su, AI & ML Principal Analyst at ABI Research.  

TinyML enables data analytics to run on low-powered hardware with low processing power and small memory size, assisted by software designed for small-sized inference workloads. This has the potential to enable edge AI to expand beyond traditional markets.   

“By bringing AI analytics beyond machine vision, more end users can benefit from smart connected sensors and IoT devices based on sound waves, temperature, pressure, vibration, and other data sources,” Su stated. The benefits of this include data privacy, high interconnectivity and interaction of various components, and the ability to overcome network bandwidth challenges.

Machine learning is not usually associated with hardware, even though a number of phones and cameras, for example, have deep learning models embedded within them. However, embedded AI quickly runs into constraints of power and space, giving TinyML an opportunity, according to an account on the blog of Plug and Play, a company that works to connect startups, corporations, VC firms, universities, and government agencies across multiple industries.

Margot Bagnoli, Electric Power & Natural Gas Analyst McKinsey & Co.

“TinyML brings Machine Learning to the scene by embedding Artificial Intelligence in small pieces of hardware,” stated Margot Bagnoli, who was a venture analyst with Plug and Play when the account was written, and is now an Electric Power & Natural Gas Analyst at McKinsey & Co. “With it, it is possible to leverage deep learning algorithms to train the networks on the devices and shrink their size without the hurdle of sending data to the cloud and, hence, added latency in order to analyze it.”  

She also mentioned Pete Warden, noting that he is a TensorFlow Lite Engineering Lead at Google and has published a book with Daniel Situnayake entitled “TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers”, which has become a reference in the field.

As of January 2021, the Arduino Nano 33 BLE Sense was the only 32-bit board that supported TensorFlow Lite, making machine learning embedded on hardware accessible to anyone. Arduino collaborates with the startup Edge Impulse to lower power consumption; the board supports processing of data at the sensor interface through an inference engine, only sending data onward when necessary.

“Ultra-low-power embedded devices are invading our world, and with new embedded machine learning frameworks, they will further enable the proliferation of AI-powered IoT devices,” stated Bagnoli. 

Read the source articles and information in ZDnet, in a press release from ABI Research, on the blog of Plug and Play, and in the book, TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers. 

Leading Business Users of Cloud Services Reap Benefits, Study Says 

By John P. Desmond, AI Trends Editor  

Companies that have committed to cloud computing for IT services have shifted from seeing the primary benefit as increased efficiency, to it being increased revenue and improved profitability, according to a new study.  

Based on a survey of 1,300 global, C-level executives and decision-makers from 11 industries and six countries, the report was conducted by Wipro FullStride Cloud Services, a unit of the global tech information services company Wipro. The FullStride Cloud Services unit was announced in July along with an investment of $1 billion in cloud technologies, capabilities, acquisitions, and partnerships over the next three years.

The report anticipated that the trend of increased benefits from cloud computing will continue as the cloud becomes more intelligent, hyperconnected, and pervasive. The report defined cloud computing leaders as the top 19% of respondents by cloud maturity, while beginners represented the bottom 32%; the groupings are not based on revenue.

Rajan Kohli, Partner, Application Services, Wipro

“Cloud is no longer just for cost savings or building agility. Our new report demonstrates that cloud drives high-impact transformation and benefits at every level of the business, including the bottom line,” stated Rajan Kohli, President and Managing Partner, Integrated Digital, Engineering and Application Services, Wipro Limited, in an account in ITP.net.  

Over the next two years, survey respondents expect that cloud will help increase revenue for 59% of firms, improve profitability for 54%, and decrease costs for 49%.

Leading cloud computing user companies are also investing in additional technologies, which helped 21% of leaders achieve a return on investment “significantly above expectations,” compared with 10% of non-leaders, according to the report. The most frequently-employed complementary technology is AI, which 78% of leaders combine with their cloud purchasing, compared to 49% of the non-leaders purchasing AI.   

By 2023, cloud leaders expect their biggest increases in usage of complementary technologies to be in 5G (53% increase in companies using), edge computing (31% increase) and grid computing (29% increase). 

The report also found that 50% of leaders expect to boost their cloud investments for cybersecurity and risk-management over the next two years.  

Cloud Growth Stats Project $800 Billion Market by 2025  

For those who may be skeptical of such lofty projections around the benefits of cloud computing, here are some stats reported in techjury:  

The public cloud computing market will be worth $800 billion by 2025 (Cision); 
By 2024, enterprise cloud spending will make up 14% of IT revenue globally (Gartner); 
Platform as a Service (PaaS) will grow by 26.6% in 2021 (Gartner); 
70% of companies using the cloud plan to increase their budgets in the future (Gartner); 
61% of businesses migrated their workloads to the cloud in 2020 (Flexera); and 
Amazon Web Services (AWS) had a 76% share of the enterprise cloud adoption in 2020 (Flexera). 

In second place in cloud services share is Microsoft’s Azure, with 69% of the market. Google Cloud is in third place with 34%. In the bottom two positions are Alibaba with a seven percent share and IBM with a 15% share, according to the techjury report.

As new providers get into the cloud computing services business, such as Wipro, the techjury authors expect downward pressure on cloud service pricing.  

Capital One Has Committed to AWS  

A look at selected specific examples of how companies have gravitated to cloud computing is illustrative.   

Chris Nims, SVP of technology, Capital One

Capital One had a private cloud strategy at the beginning of its migration, but after not hitting their numbers, the company made a decision to commit to AWS. Today, Capital One is said to be the first financially-regulated company that runs entirely in the cloud, according to Chris Nims, the bank’s SVP of technology, in an account in Forbes.  

By the end of 2021, the company plans to add more than 3,000 technologists, according to the account.  

To become cloud-first, Capital One recycled 103 tons of copper and steel and built 80% of the nearly 2,000 applications it now runs in the cloud from the ground up. “In 2020, we closed our last data center,” stated Nims. “We have been on the massive digital transformation journey for the past several years to get out of data centers.”  

Moreover, “We are truly all in on the cloud, and AWS has been instrumental in enabling us to take full advantage of the benefits of being in the cloud,” stated Nims. “Going all in on the cloud has enabled both instant provisioning of infrastructure and rapid innovation. We are able to manage data at a much larger scale and unlock the power of machine learning to deliver enhanced customer experiences.”  

More than 75% of this year’s expected 3,000 new hires will be engineers, stated Nims, with expertise in areas such as cloud, data, machine learning (ML), artificial intelligence (AI) and security. The bank decided eight or 10 years ago that to compete, it had to develop its own software.  

Pinterest On the Cloud Since Its Founding in 2010 

Pinterest decided to build the company from its founding in 2010 on cloud services. The company’s current market cap is $35 billion. The company went public in April 2019.  

In financial disclosures in its filing for an IPO, Pinterest reported plans to spend $750 million on AWS by 2023. At the time of the filing, the company had spent over $300 million on AWS. Contracts with AWS disclosed in the filing obligated Pinterest to pay AWS the difference if its actual spending on the cloud service falls short of the commitment, according to a report on the blog of N2WS, a company that supports businesses committing to the cloud for computing services.

Pinterest started on AWS in 2010. Their business grew to 200 million active monthly users in 2017; in the second quarter of 2021, Pinterest had 454 million active monthly users worldwide.  

Pinterest first moved to Docker containers, then started using self-hosted Kubernetes for orchestration, according to the N2WS report. This shift allowed them to optimize their cloud infrastructure, simplify their deployment and management processes, and save on infrastructure costs. In addition to using AWS for processing, Pinterest also uses various other services for storage and data analysis, allowing them to put more focus on their product. 

If a company decided to pull out of AWS, it would face a “daunting task,” the authors of the N2WS report stated. “The re-architecting of components would require an enormous amount of work,” the report stated, adding, “It is safe to say that they will avoid such a move at all costs. Migrating data out of AWS would also be so costly that no company is likely to choose that option.” 

Dropbox Dropped Out of AWS 

Dropbox, offering storage and backup services in the cloud, decided to drop AWS in 2017. At the time, Dropbox had 500 million users and 200,000 business customers. The company had to move 50 petabytes of data, a five followed by 16 zeros in bytes, that was stored on AWS servers, according to an account in TechCrunch.
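A back-of-envelope calculation shows why a migration at that scale is formidable. The single 10 Gbps link below is an illustrative assumption, not a description of Dropbox's actual network:

```python
# Back-of-envelope sketch of why moving 50 petabytes is daunting.
# The 10 Gbps link speed is an illustrative assumption only.

PETABYTE = 10 ** 15            # bytes
data_bytes = 50 * PETABYTE     # 5 x 10^16 bytes

link_bits_per_sec = 10 * 10 ** 9     # one 10 Gbps link
link_bytes_per_sec = link_bits_per_sec / 8

seconds = data_bytes / link_bytes_per_sec
days = seconds / 86_400

print(f"{data_bytes:.1e} bytes over one 10 Gbps link: about {days:.0f} days")
```

At roughly 463 days for a single saturated 10 Gbps link, even heavy parallelism still means months of transfer, which helps explain the backbone buildout described next.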

To support the workload, Dropbox had to build and equip three US data centers, plus assemble the network backbone to enable the needed communication between the US data centers and other facilities located throughout the world. The company made the move to get more control over infrastructure services it viewed as strategic to its business.  

“For us, it was about quality and control and management,” stated Dan Williams, head of product engineering, to TechCrunch. “We know there are solid third parties out there with [high] quality and performance, but we felt ours could be equal or even better because we know the system so well.”  

Read the source articles and information in ITP.net, in techjury, in Forbes and in TechCrunch.