Robot Umpires Invade Baseball; AI That Makes Mistakes on Purpose Could Help 

By John P. Desmond, AI Trends Editor   

Robot umpires are being employed in minor league baseball as an experiment, while a study of strike zones of major league umpires shows a fairly wide variation. 

The Automated Ball-Strike System, which players like to call “robo-umpire,” is being tested in minor league baseball this season, according to an account in The New Yorker.

Major League Baseball designed the system and is testing it in eight of nine ballparks in the Low-A Southeast League. While the term “robo-umpire” might suggest a little R2-D2 positioned behind the plate, MLB decided to have human umpires announce the calls, which are fed to them through an earpiece. A black sensor that looks like a pizza box with a glowing green eye is positioned behind and above home plate, according to The New Yorker account.

The pizza-box device is made by the company TrackMan, founded by two Danish brothers, Klaus and Morten Eldrup-Jørgensen, who created it to train golfers. The MLB strike zone is an imaginary box over home plate, 17 inches wide and extending vertically from the batter’s knees to the middle of the chest, just under the armpits. TrackMan’s website states that AI is incorporated into its golf product. 

The first umpires were volunteers who wore top hats, at whom spectators “hurled curses, bottles and all manner of organic and inorganic debris,” according to a paper by the Society for American Baseball Research quoted in The New Yorker account. 

Fred DeJesus, umpire, minor league baseball, Atlantic League

After the game observed by The New Yorker, the writer spoke to the umpire, Fred DeJesus. “There were six calls that I disagreed with,” DeJesus stated, referring to the words that came through his earpiece from the robot. 

The ABS system last year had a three-dimensional strike zone; this season, the zone was defined in two dimensions.  

MLB Umpires Association Cooperating on Robo-Umpires 

The MLB Umpires Association agreed in their latest labor contract to cooperate with use of the ABS system if Commissioner Rob Manfred elects to use the system in the major leagues, according to a recent account from AP News. 

Chris Marinak, chief operations and strategy officer, Major League Baseball

“It’s hard to handicap if, when or how it might be employed at the major league level, because it is a pretty substantial difference from the way the game is called today,” stated Chris Marinak, MLB’s chief operations and strategy officer.  

While MLB tracks the accuracy of ball-strike calls by its umpires, it does not release the figures. However, umpire Angel Hernandez, in a lawsuit filed against MLB, stated that his accuracy on ball-strike calls increased from 92.19% in 2012 to 96.88% in 2016.

Players subject to the ABS will be measured before their first game, Marinak stated, and the top of the strike zone will be 56% of their height and the bottom, 28%. The strike zone will be measured in two dimensions at the front of home plate.  
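
As a rough illustration of how such a height-based zone could be computed, here is a minimal sketch in Python. The 17-inch plate width and the 56%/28% height fractions come from the article, as does the idea of judging the pitch in two dimensions at the front of home plate; the function and variable names, and everything else, are assumptions for illustration only.

```python
# Minimal sketch of a two-dimensional automated strike zone, assuming the
# percentages described by MLB's Chris Marinak: the top of the zone is 56%
# of the batter's height, the bottom is 28%, and the pitch is judged in two
# dimensions at the front of home plate. Names here are illustrative only.

PLATE_WIDTH_IN = 17.0  # home plate is 17 inches wide

def strike_zone(batter_height_in: float) -> dict:
    """Return the rectangular zone boundaries, in inches, for a batter."""
    return {
        "left": -PLATE_WIDTH_IN / 2,
        "right": PLATE_WIDTH_IN / 2,
        "bottom": 0.28 * batter_height_in,
        "top": 0.56 * batter_height_in,
    }

def is_strike(pitch_x_in: float, pitch_z_in: float, batter_height_in: float) -> bool:
    """Check a pitch location (horizontal x, vertical z) at the front of the plate."""
    zone = strike_zone(batter_height_in)
    return (zone["left"] <= pitch_x_in <= zone["right"]
            and zone["bottom"] <= pitch_z_in <= zone["top"])

# Example: a 74-inch batter; a pitch 6 inches off center and 30 inches high.
print(is_strike(6.0, 30.0, 74.0))  # True: 30 falls between 20.72 and 41.44
```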

Right now, MLB is trying to get feedback on the ABS, such as on the shape and design of the strike zone. “We have a lot of work to do to decide what is the zone with this automated system,” Marinak stated. “Is it more of an oval-shaped zone, which is more consistent with what’s called today? Is it a square zone? Is it a three-dimensional zone? How does the zone shift from hitter to hitter? Is it literally the zone drawn every single pitch, as is written in the [rule] book, or is it a fixed zone that’s based on your height as a hitter, no matter how much you sort of squat down or stand up?”  

These are serious questions. If AI is involved in assessing the quality of a pitch, the designers of the system will need to decide definitively on the shape of the strike zone, oval or square, and on the number of dimensions.
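
To make the “oval versus square” question concrete, the sketch below contrasts a rectangular membership test with an elliptical one inscribed in the same boundaries. This is purely illustrative; MLB has not said how its zone shape is actually defined, and the numbers are placeholders.

```python
# Hypothetical comparison of a "square" (rectangular) zone test with an
# "oval" (elliptical) one inscribed in the same boundaries; illustrative only.

def in_rectangular_zone(x, z, left, right, bottom, top):
    return left <= x <= right and bottom <= z <= top

def in_elliptical_zone(x, z, left, right, bottom, top):
    cx, cz = (left + right) / 2, (bottom + top) / 2   # center of the zone
    rx, rz = (right - left) / 2, (top - bottom) / 2   # horizontal/vertical radii
    return ((x - cx) / rx) ** 2 + ((z - cz) / rz) ** 2 <= 1.0

# A pitch near the corner of the rectangle is a strike under the square zone
# but a ball under the oval zone.
print(in_rectangular_zone(8.0, 40.0, -8.5, 8.5, 20.7, 41.4))  # True
print(in_elliptical_zone(8.0, 40.0, -8.5, 8.5, 20.7, 41.4))   # False
```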

Today’s Strike Zone Varies by Umpire  

The strike zone today is subject to the whim of the individual home plate umpire, according to a recent account in The Washington Post.

A study conducted by The Post based on pitch-tracking data from TruMedia and Baseball Prospectus through the games of Aug. 1 showed umpires appear to be squeezing pitchers in 2021. Specifically, pitches that should have been called strikes this season have instead been called balls at a higher rate than ever before.  

So far this season, umpires are calling fewer strikes than at any point since 2008, the first year sophisticated pitch tracking was available, the Post study showed. According to data compiled by TruMedia, which provides data analytics, visualizations and video scouting tools to professional sports teams, umpires made 11,644 incorrect calls on balls and strikes in 2020, equal to about 6.5 poor calls per game.

Some umpires have different strike zones for each team in a game, the Post study showed. 

Given that this study confirms bad umpiring is part of baseball, an AI system involved in the automation of the strike zone should perhaps take this into account.   

Cornell Researchers Working on Fallible AI  

Fortunately, researchers at Cornell University are studying an AI system that better understands that humans make mistakes, according to a recent account in Wired. Instead of baseball, however, the AI program, named Maia, is focused on chess, specifically on predicting the moves humans make, including the mistakes they tend to make.

Professor Jon Kleinberg, who led the development of Maia, sees it as a first step toward developing AI that better understands human fallibility. He hopes this results in AI that is better at interacting with humans, by teaching, assisting or negotiating with them. 

He chose to focus on chess because it has a long track record of machine intelligence winning out over humans. “It is this sort of ideal system for trying out algorithms,” Kleinberg stated.

The Cornell team modified existing open source code to create a program that learned by favoring accurate predictions of human moves. It is unusual in how it focuses on finding the most likely move a human would make.  
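
The article does not publish Maia’s code, but the core idea of “favoring accurate predictions of human moves” can be sketched with a standard cross-entropy objective over the moves humans actually played. The toy model below uses synthetic data and a simple softmax classifier purely to illustrate that objective; it is not the Maia implementation.

```python
# Minimal sketch of the idea behind a human-move predictor: train a model with
# a cross-entropy objective so it favors the move a human actually played,
# rather than the objectively best move. Features, shapes, and data here are
# synthetic placeholders, not Maia's (which builds on a full chess engine).
import numpy as np

rng = np.random.default_rng(0)
n_positions, n_features, n_moves = 512, 64, 20   # toy sizes, assumed

X = rng.normal(size=(n_positions, n_features))            # encoded board positions
human_moves = rng.integers(0, n_moves, size=n_positions)  # index of move a human chose

W = np.zeros((n_features, n_moves))
lr = 0.1
for _ in range(200):
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    grad = probs.copy()
    grad[np.arange(n_positions), human_moves] -= 1.0        # d(cross-entropy)/d(logits)
    W -= lr * (X.T @ grad) / n_positions

predicted = (X @ W).argmax(axis=1)
print("agreement with human moves:", (predicted == human_moves).mean())
```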

If MLB could tap the professor’s brain to build an AI system that could preserve the apparent fallibility of its umpires, perhaps that would continue the traditions of the game in the robo-umpire era.  

Read the source articles and information in The New Yorker, from AP News, from The Washington Post and from Wired.

Why AI Autonomous Cars Aren’t Yet Paying Attention To Honking Horns   

By Lance Eliot, the AI Trends Insider 

Horn honking. We’ve all done it; some more so than others. It used to be that honking a car horn was quite customary and an expected element in the act of driving. 

Indeed, when I first learned to drive, the driver training class included a brief segment devoted to the use of the car horn. For example, we began by employing the horn in a delicate fashion, such as lightly tapping the horn to generate some casual and modestly alerting toots. We also were later taught to lean on the horn and generate an ear-shattering blast, just in case an outstretched use of the car horn was warranted.   

All told, the notion was that you needed to know how to use your car horn for a wide variety of circumstances. The horn was integral to driving a car. Akin to knowing how to steer, speed up, slow down, and the rest, you likewise should be versed in the use of the car horn. The logic was quite direct: if cars are equipped with car horns, there is assuredly a reason for having them, and any newbie driver should be proficient in using such equipment.

You might go so far as claiming that the use of a car horn was a fine art, if you will. The person using a horn was considered responsible for doing so. They had to use a horn sparingly. This was not a trifling feature on a car. It was a potentially lifesaving device.

The situations that warranted the use of a car horn were supposed to be relatively serious and not for simply playing around. If another car was dangerously veering into traffic, and it seemed that the driver was not aware of what they were doing, a hefty dose of the horn was considered appropriate. Better to alert the driver and perhaps overstate the case via the sharpness of the horn versus allowing a bad predicament to turn into an injurious car crash. 

Besides the occasions when a horn was used to aid and protect yourself, other cars and even pedestrians, there was leniency to sometimes use the horn for fun. You might be driving along and see your best friend walking on the sidewalk, so you would do a light tap of the horn to get their attention. The friend would smile and wave at you. You would wave back. All in all, an innocent and everyday occurrence. 

Of course, like most things, car horns eventually became used in rather abrasive ways. As they say, that’s what sometimes happens with nifty toys. They get extended into untoward uses that go far beyond what was initially expected.   

One of the most common uses of a horn that is decidedly over-the-top consists of honking a horn at the very instant that a traffic signal goes from red to green, doing so when a car is sitting directly ahead of you and that hasn’t gotten underway just yet. It is one thing to wait a solid handful of seconds and then consider doing a courteous short burst as a reminder that the light is green, while it is something entirely different to use the horn like a massive claxon and browbeat the other driver by stunning them with a bone-breaking blast of sound at the instant a light goes green.   

That’s not cool.   

The thing is, a person using their horn in this aggressive manner is seemingly saying that the other driver is a complete dolt. Wake up, you idiot! Get your car in gear and get the heck going! That’s what it conveys to the person that might have been ready to get underway and merely needed a moment to start doing so. They might consciously be waiting momentarily for the intersection to clear or have some other bona fide safety basis for slowly proceeding ahead.   

Meanwhile, the horn blasting person is boiling mad that the driver has not pushed the pedal to the metal. Maybe this feverish horn blaster is late for work or has been caught behind slowpoke drivers throughout the entire day. For whatever reason, the person armed with a horn decides that it is the best way to motivate others around them. 

Regrettably, this way of thinking often means that they use their horn repeatedly, almost constantly, for any perceived “infraction” by other drivers or pedestrians.   

Of course, the response to getting bleated with a harsh horn blast is not necessarily going to be what the horn user expected.   

Some drivers that get an anxious horn smacking are apt to go into a form of road rage. They might decide to purposely not proceed at all, staying still, doing so to further frustrate the person that has used a horn. Take that, is the thinking of the horn respondent. In turn, the horn user will likely try using the horn again, perhaps more so than the first blast.   

The whole matter cascades out of control. 

Another possibility is that the horn blast causes the driver to look around in anticipation of seeing something that causes them purposely to remain still. In other words, their interpretation of the horn blast was that the driver behind them was helpfully warning them about a danger brewing. Perhaps this horn user saw that a pedestrian had wandered into the street and was trying to make sure that the car getting honked does not mow down the innocent person on foot.   

You might roll your eyes on that one.   

The odds that someone would use their horn as a means to aid others and not due to a semblance of greed would seem like a stretch in today’s world. To be clear, I’m not saying that it never happens, only that the chances of that occurring are relatively slim. A hornblower is usually telling others to pretty much get the heck out of their way, rather than seeking to keep the peace and altruistically aiding others. I realize that seems a bit pessimistic, sorry to say.   

That’s part of the whole problem with using a horn. Like a box of chocolates, you never know what you are going to get (well, more directly, you don’t know why the car horn was utilized).   

The insurmountable problem is that a horn blast has no particular meaning associated with it. The horn does not tell what the horn has been used for. Nobody knows, other than the horn blaster. What might be in their minds is at times an utter mystery. There could be a good reason for the horn being used, or there could be a wild and completely nutty reason for using the horn.   

In addition, you don’t know who the horn is being aimed at. Is it the car immediately in front of the horn honker? Is it somebody standing on the sidewalk? Is it a vehicle barely visible and already down at the end of the block?   

Consider what reactions can occur to the use of this non-specific form of communication. 

I’ve seen some drivers that were blasted with a horn that decided to allow the horn blasting car to proceed ahead of them. The driver then sneakily positions themselves directly behind that car. Once they are in the needed position, they start honking their horn at the other driver. Incessantly. The idea is that, presumably, it makes sense to give that horn user a taste of their own medicine. 

It could be that the original blast of the horn had nothing to do with them. Unfortunately, given that the horn is a scatter-style communiqué, everybody within earshot might think the horn was intended for them. 

The resulting road rage incidents can turn deadly and drag an entire slew of participants into the melee. None of them necessarily know why the horn was used. None of them necessarily know who the intended target was. All they know is that somebody used their car horn and it was irksome that they did so.   

People go to fisticuffs or worse sometimes. 

All because of the use of a car horn. 

In California, the DMV (Department of Motor Vehicles) has this official Vehicle Code verbiage about car horns (see Section 27001): “The driver of a motor vehicle when reasonably necessary to insure safe operation shall give audible warning with his horn,” and furthermore “The horn shall not otherwise be used, except as a theft alarm system.”   

The formal rules emphasize that a car horn is only to be used when reasonably necessary, and otherwise isn’t supposed to be used.   

There is a bit of a loophole in that the interpretation of what is reasonable versus unreasonable is open-ended. Thus, a malcontent might in their noggin believe that using their car horn was reasonably necessary, even though to any independent third party the usage was egregious and utterly unnecessary in the circumstances. This is something that judges and courts often have to decide when such horn squawking cases arrive at the courthouse.   

Within the California DMV Driver’s Handbook, the recommendations are that you should only use the horn when necessary, doing so to avoid collisions. You can also use the horn to get eye contact with other drivers, though presumably once again as a means to avert a collision. And there are allowed special cases such as using a horn on a narrow mountain road (providing a verbal alert for cars that are also using the road but might be hidden from view).   

For those of you that have ever visited New York City, you might remember that the jarring sounds of car horns used to be a well-known part-and-parcel element of being in the city that never sleeps. During the day, you would hear the constant din of honking horns. On top of that, all night long there would be the blaring sounds of car horns too. Cabbies used them extensively. Everyday drivers used them. Truck drivers used their horns. It was an orchestra of grand proportions, though playing essentially one note, over and over again. 

The joke used to be that assuredly this must be the city that never sleeps, due to the fact that the yapping horns kept you up all night. You had no choice but to remain awake. Anyone trying to sleep in their hotel room or apartment had to cope with the never-ending sounds of those ubiquitous and sleep-wakening horns. 

Eventually, enough was enough, and the authorities decided to try and curtail the incessant horn honking. Official traffic signs were posted that warned drivers to be aware that no honking was allowed in the realm of the posted sign. Laws were passed that stated the use of a horn was only allowed for emergency circumstances. A hefty fine was assigned to any tickets that involved the unnecessary use of a horn. Etc. 

Some would say that discouraging the use of car horns is fundamentally wrong.   

If that catches your attention, allow me a moment to explain.   

They would argue that horns can be a vital tool for driving. Imagine all the lives saved or injuries averted by having used a horn as a keystone to achieving safety. If you suppress the desirability of using a car horn, you are going to lose out by making drivers second guess the use of the horn. In turn, when situations arise that somebody would have been saved by the chance of a horn bleating, the driver probably won’t use their horn and the otherwise in-danger person will suffer whatever calamity was awaiting them.   

It is akin to the old expression about tossing the baby out with the bathwater (a piece of wisdom that by today’s standards is a bit rusty). The point being overall that the horn can be used for good, and by making the use of a horn into a bad thing, you are swinging too far and going to fall prey to situations that a horn use would have been crucial.   

That’s one position or perspective about the intrinsic value of using a car horn.   

Here’s another one that might raise your eyebrows. Some have argued that honking your car horn is a First Amendment right. 

You see, a horn could be construed as a form of expression. Perhaps you opt to honk your horn in protest of some matter that concerns you. The car horn is not being used at that moment for traffic or driving purposes. Instead, it is the use of sound as an expression of your angst or declarative statement that you are trying to convey.   

Generally, the courts have not especially given the green light to such a legal posture. You’ve probably heard about the famous notion that your freedom of expression does not necessarily allow you to yell “Fire!” in a crowded theater. To some degree, the use of a horn when in traffic has a similar connotation. This is a complicated legal aspect and something still being bandied about in the courts.   

Whew, that’s a lot about car horns.   

Based on this overall discussion, I would guess that you’ve likely come to the conclusion that car horns have many useful and yet also nebulous facets. 

The horn itself is not especially the issue. It is those that choose to use the horn. Sure, we could remove all horns from all vehicles, dispensing with the topic entirely, but this would undeniably eliminate the chance of using a horn for the reasonable and potentially lifesaving purposes it is intended to serve.

It is relatively safe to say that car horns will remain a standard feature of cars.   

Meanwhile, consider that the future of cars consists of AI-based true self-driving cars. 

There isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle.

Here’s an intriguing question that is worth pondering: What will happen if you opt to use your horn toward an AI-based true self-driving car that might be nearby you while underway via your conventional human-driven car?   

Before jumping into the details, I’d like to further clarify what is meant when I refer to true self-driving cars.   

For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/   

Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/   

For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/   

For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/   

Understanding The Levels Of Self-Driving Cars 

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. 

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend).   

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable). 

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that has been arising lately: despite human drivers continuing to post videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, no one should be misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/   

To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/   

The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/   

Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/   

Self-Driving Cars And Some Honking Going On   

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving.   

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.   

Why this added emphasis about the AI not being sentient?   

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet. 

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.   

Let’s dive into the myriad of aspects that come to play on this topic.   

The question to be pondered here is what will happen if you honk a horn at an AI self-driving car.   

Turns out, I am around self-driving cars quite a bit and witness people that do indeed honk their horn at a self-driving car. But before I delve into those details, we can consider a somewhat allied question.   

If a tree falls in a forest, will it make a sound? 

You’ve certainly heard that question before. Seems like an age-old conundrum.   

The answer to that question is relatively straightforward, plus it provides insights into what will happen when someone honks at a self-driving car. Those two topics seem to be completely disconnected, but you’ll see in a moment that they are similar in key respects. 

When a tree falls in a forest, there will undoubtedly be vibrations or waves that carry through the air. Anyone, or presumably anything, that can sense those vibrations and has an apparatus for hearing will ostensibly hear the crashing sound. If you define the word “sound” as referring solely to the vibrations and ergo omit the need for a hearing mechanism, you could claim that the fallen tree did make a sound, regardless of whether anyone or anything was nearby to hear it.

Meanwhile, if you insist that “sound” must encompass a hearing mechanism, and suppose there aren’t any hearing mechanisms within earshot, you could contend that the falling tree did not make a sound. In that case, when there aren’t for example any people or animals around that have the sense of hearing, it is feasible to argue that there wasn’t any sound associated with the crashing tree.   

Hope that straightens things out.   

This is directly related to a self-driving car via the notion of whether a self-driving car has any provision for detecting sounds that emanate from outside the autonomous vehicle. Some self-driving cars are not yet equipped with externally facing microphones. Thus, the self-driving car is unable to detect sounds coming from the streets, such as the engine noise of other cars, the chatter of nearby pedestrians, or the sound of a honking horn.

Without some sensory device intended for detecting sounds, the AI driving system is not going to have any tangible input that a horn is being honked by someone. Note that there might be microphones within the interior of the self-driving car, often established to allow for interaction with passengers, though these microphones would only tangentially pick up external sounds, if at all.   

In short, for the case of self-driving cars that are not set up with externally-oriented microphones, the honking of a horn at a self-driving car is futile and will not be detected. I suppose you might conceive of this as akin to having a human driver at the wheel that is unable to hear anything that originates outside of the vehicle, perhaps because of their lack of hearing or due to the vehicle being so airtight that no sounds can penetrate the interior.   

One concern about self-driving cars not being equipped with external-facing microphones entails the lack of being able to detect that an emergency vehicle might be nearby. Human drivers are expected to listen for ambulances, fire trucks, police cars, and other emergency vehicles that use their sirens to forewarn other drivers of their presence. An AI driving system would not viably be able to do this type of detection without an audio input capacity, other than scanning visually via the video cameras or using radar, LIDAR, etc.   

Another qualm is that AI driving systems are potentially unable to detect any spoken commands or shouting that might occur by people around or near a self-driving car. A pedestrian might seek to yell loudly at a self-driving car to watch out for a child up ahead, yet the AI driving system might not detect such a warning due to a lack of outward-mounted microphones.   

In the case of self-driving cars that have a human serving as a backup safety driver, there is a chance that this human might hear a honking horn (and other external sounds too, naturally). You see, many of the existing self-driving car tryouts on our public roadways have a person employed and sitting at the steering wheel as a monitor of the AI driving system. The human is supposed to disengage the AI and take over the driving controls when needed. 

We, therefore, have two variations about the hearing of a honking horn. One is that the self-driving car won’t detect the horn since there aren’t any audio devices provided for that use. The other is that a human backup driver might be in the vehicle and potentially hear the honking horn, for which the driver might then take over the driving. 

Those are the most likely aspects currently.   

The third variant is a self-driving car that is equipped with externally facing microphones. This suggests that the AI driving system will possibly detect the honking horn. The key will be whether the AI developers have programmed the AI driving system to take as input the audio detections, along with having some form of pattern detection to ascertain that the sound is in fact a honking horn (there obviously can be a lot of street-related sounds detected during a driving journey). 
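
As a purely hypothetical sketch of what such pattern detection might look like at its crudest, the snippet below screens an external-microphone frame for sustained energy in a horn-like frequency band. The sample rate, band limits, and threshold are assumptions; a production AI driving system, if it processes audio at all, would rely on far more robust learned classifiers.

```python
# Illustrative only: a naive screen of an externally facing microphone feed
# for a horn-like sound, by checking for concentrated energy in a narrow
# frequency band. Band limits and threshold are assumptions, not real values.
import numpy as np

SAMPLE_RATE = 16_000          # Hz, assumed microphone sample rate
HORN_BAND = (300.0, 600.0)    # Hz, rough range where many car-horn tones sit
ENERGY_RATIO_THRESHOLD = 0.5  # fraction of spectral energy inside the band

def looks_like_horn(audio_frame: np.ndarray) -> bool:
    """Return True if most of the frame's energy falls in the horn band."""
    spectrum = np.abs(np.fft.rfft(audio_frame)) ** 2
    freqs = np.fft.rfftfreq(len(audio_frame), d=1.0 / SAMPLE_RATE)
    in_band = (freqs >= HORN_BAND[0]) & (freqs <= HORN_BAND[1])
    total = spectrum.sum()
    return total > 0 and spectrum[in_band].sum() / total > ENERGY_RATIO_THRESHOLD

# Synthetic test: a 440 Hz tone (horn-like) versus broadband noise.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
print(looks_like_horn(np.sin(2 * np.pi * 440 * t)))                         # True
print(looks_like_horn(np.random.default_rng(0).normal(size=SAMPLE_RATE)))   # False
```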

Imagine that you are an AI developer and you need to program the AI driving system to contend with what appears to be the sound of a honking horn. What would you have the AI driving system do upon its detection of a horn sound?   

That’s a tough one.   

Recall that we’ve already covered the vacuous nature of a honking horn. A human that is honking a horn might be doing so for innumerable reasons. Furthermore, the horn honking might have nothing whatsoever to do with the self-driving car. A person might be honking at other cars, or pedestrians, or at a duck that was waddling next to a pond that is adjacent to the roadway.   

Human drivers oftentimes have no semblance of why someone honked their horn. Figuring out whether there was a bona fide roadway safety concern requires looking around extensively and trying to ferret out what might be the basis for the honked horn. This analysis of the driving scene might turn up empty as a revealing indicator for the horn being used.   
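
Continuing the thought experiment, a developer might encode that dilemma as a very conservative response policy: corroborate the horn against the rest of the perception stack, and otherwise just log the event. The sketch below is hypothetical and reflects no actual vendor’s logic.

```python
# Hypothetical decision logic for an AI driving system that has just detected
# what appears to be a horn. It simply encodes the dilemma described above:
# the horn carries no inherent meaning, so the only defensible response is to
# re-scan the scene and stay cautious.

def respond_to_horn(scene_hazards: list[str], has_backup_driver: bool) -> str:
    """Return a coarse action given the hazards the perception stack reports."""
    if scene_hazards:
        # The horn may be a warning; something in the scene corroborates it.
        return f"slow down and yield to: {', '.join(scene_hazards)}"
    if has_backup_driver:
        # Nothing found; surface the event so a human can double-check.
        return "log horn event and alert backup safety driver"
    # Nothing found and no human aboard: note the event, keep driving normally.
    return "log horn event and continue with heightened monitoring"

print(respond_to_horn(["pedestrian entering crosswalk"], has_backup_driver=False))
print(respond_to_horn([], has_backup_driver=True))
```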

For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/ 

On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/ 

I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/ 

Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/ 

Conclusion 

There are a lot more twists and turns on this seemingly simple topic.   

Via observation, I’ve ascertained that some human drivers honk their horns at self-driving cars because they are trying to declare that hey, this is neat, it’s a self-driving car. This is akin to seeing something unusual on the roadway and honking just to point out the extraordinary appearance of the object or thing. Of course, this is generally an improper and illegal use of a car horn, assuming that the circumstance does not involve the pending possibility of a collision or equivalent.   

In addition, some smarmy human drivers believe they are being clever by honking at a self-driving car, as though this is a means of testing the capabilities of the AI driving system. Please don’t do this. Once again, it would usually be considered an illegal use of a car horn.   

Then there are the sincere cases. A human driver might genuinely be using their car horn upon noticing that the self-driving car appears to be heading toward a collision. The hope presumably is that the AI driving system will detect the sound and take evasive action, or that a human backup driver is present and will take over the driving controls.   

Lamentably, dealing with horn honking is considered an edge or corner case for many of the existing self-driving car development efforts. An edge or corner case is something that is not considered at the core of the task at hand. The item is placed onto a list of lower priority considerations and will someday get attention (some also assert it will be negated entirely and replaced by electronic transmissions such as V2V and V2P).   

As mentioned, honking your car horn is like a box of chocolates. Besides not knowing what you will get, there is also the consideration of whether any falling trees in the forest will be heard. 

That’s for darned honking sure. 

 

Copyright 2021 Dr. Lance Eliot  

http://ai-selfdriving-cars.libsyn.com/website 

Underspecification Challenging Machine Learning Modeling 

By John P. Desmond, AI Trends Editor 

The three little bears strove to get it just right, and AI model builders strive to do the same thing when it comes to specifying their models. Underspecification occurs when you build a model that performs well on your data, but so do many other models, which could lead to your model decaying over time once deployed.

The discussion of underspecification kicked off last fall when Google researchers published a paper on the subject, “Underspecification Presents Challenges for Credibility in Modern Machine Learning.”   

“ML models often exhibit unexpectedly poor behavior when they are deployed in real-world domains. We identify underspecification as a key reason for these failures,” stated the paper, put together by a group of scientists led by author Alexander D’Amour, a research scientist with Google Brain of Cambridge, Mass.  

Matt Brems, Senior Manager, Data Science Product & Strategy, Roboflow

In an interpretation of the paper, Matt Brems, Senior Manager, Data Science Product & Strategy at Roboflow, a company focused on software to help in tagging objects in images and video, describes how a developer builds a model with a good mean average precision (mAP) score for accuracy. “Then you deploy your model, and your model does worse than expected,” Brems states in a blog entry. What happened?

The developer followed all the right steps: split the data into training, validation, and testing sets; thoroughly cleaned the data; and made sure the engineering steps were rooted in subject-matter expertise.  

The problem, very likely, was underspecification, meaning the same performance can be realized on the selected data set in many different ways. “This is a problem with all sorts of machine learning models, including in computer vision, natural language processing, and more,” the author states.  

What is the upshot? “The computer doesn’t know which model is a better reflection of reality—just which model happened to be better on this specific set of data,” Brems stated.

Find the Line of Best Fit? Or Detect the Object?

Middle school math students are sometimes asked to find the line of best fit in a scatter plot on a two-dimensional grid. The AI engineer faces a similar fitting problem, such as in object detection, and the choices that go into modeling it are both intentional and unintentional. Intentional choices might include the types of image preprocessing the model-builder performs, or the amount of data he or she collects. An unintentional choice might include the random seed selected when fitting the model or algorithm to the data. (A random seed is a number used to initialize a random number generator.)
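
A toy illustration of that point: two models can fit the observed data equally well and still disagree sharply away from it, so the data alone has not specified which one reflects reality. The example below uses polynomial fits as stand-ins; the data is synthetic.

```python
# Minimal sketch of underspecification: two models that look equally good on
# the data you have can behave very differently away from that data. The
# dataset and model choices here are toy placeholders, not the paper's setup.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(0, 1, size=60)
y = 2 * x + 1 + rng.normal(scale=0.05, size=60)   # ground truth is linear

simple = np.polyfit(x, y, deg=1)     # the "right" model for this data
complex_ = np.polyfit(x, y, deg=7)   # also fits the observed range well

def rmse(coeffs):
    return np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))

print("training RMSE, degree 1:", round(rmse(simple), 3))
print("training RMSE, degree 7:", round(rmse(complex_), 3))

# Both look fine on the data seen so far, but ask them about x = 2 and they
# tell very different stories; the data alone did not pin the model down.
print("prediction at x=2, degree 1:", round(np.polyval(simple, 2.0), 2))
print("prediction at x=2, degree 7:", round(np.polyval(complex_, 2.0), 2))
```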

“At the end of the day, a model is a simplification of reality,” Brems stated. “Models are generally supposed to reflect or mimic the real world. But the way that models are fit, there’s no guarantee that your computer selects a model that reflects the logic or science of your specific application.”  

The result of this can be, “When you deploy your very accurate, high-performing-on-the-test-set model, there’s a good chance your model immediately starts performing poorly in the real world, costing you time, money, and frustration,” Brems stated. 

Among his suggestions for AI modelers: “Draw your testing data from somewhere other than the training distribution—ideally mirroring your deployment environment;” use stress tests to detect underspecification; and “make sure your machine learning pipeline is reproducible.”
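
As a minimal sketch of the first two suggestions, the snippet below trains a simple classifier and then compares its accuracy on held-out data against a deliberately shifted copy of that data. The shift used here, a constant feature offset, is an arbitrary stand-in for whatever drift a real deployment environment would introduce.

```python
# Minimal sketch of one stress test: evaluate on data whose distribution is
# deliberately shifted away from training. The synthetic dataset and the
# feature offset below are placeholders for real deployment drift.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# In-distribution test accuracy versus accuracy on a shifted copy of the test set.
X_shifted = X_test + 1.5   # crude covariate shift applied to every feature
print("i.i.d. test accuracy: ", round(model.score(X_test, y_test), 3))
print("shifted test accuracy:", round(model.score(X_shifted, y_test), 3))
```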

Sees Challenge to Current Approach to Machine Learning Models 

Collins Ayuya, graduate student in computer science

Similar advice was offered on the blog of Section, a company offering edge-as-a-service to help developers create workloads to run on a distributed edge. Referring to the paper from the Google researchers, the entry’s author, Collins Ayuya, a master’s student in computer science, stated, “The way we approach the training of machine learning models has been challenged.”

He explains a faulty approach this way: “Underspecification implies that the training phase of a model can produce a good model. It can also produce a flawed model, and it would not tell the difference. As a result, we wouldn’t either.”  

Why is this a problem? “Underspecification threatens the credibility of models. The reliability of the process used to train models to perform in the real world as they do in training has been called into question,” Ayuya states. 

He also had suggestions for addressing the issue; briefly summarized, they include: limit the model’s complexity; understand the proposed real-world application of the model (“Consulting domain experts could help do this”); and conduct three types of stress tests: stratified performance evaluations, shifted performance evaluations, and “contrastive” evaluations. The latter play a part in explainable AI, Ayuya noted.
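
A stratified performance evaluation, the first of those stress tests, can be sketched in a few lines: report accuracy per subgroup rather than a single overall figure. The groups and predictions below are synthetic placeholders.

```python
# Minimal sketch of a stratified performance evaluation: instead of one overall
# accuracy figure, break performance out by a subgroup attribute. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
groups = rng.choice(["group_a", "group_b"], size=1000, p=[0.8, 0.2])
y_true = rng.integers(0, 2, size=1000)
# Pretend predictions: accurate on group_a, close to chance on group_b.
y_pred = np.where(
    (groups == "group_a") | (rng.random(1000) < 0.55), y_true,
    1 - y_true,
)

print("overall accuracy:", round((y_pred == y_true).mean(), 3))
for g in ("group_a", "group_b"):
    mask = groups == g
    print(f"{g} accuracy:", round((y_pred[mask] == y_true[mask]).mean(), 3))
```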

An article entitled “How to Write the Specification for Your AI Project” from the blog of Apro Software compares an “ordinary” software development project to an AI project. While there are many similarities, “We would like to draw your attention to the most important differences,” stated the author of the post, Konstantin Voicehovsky, a technical manager at Apro, a software development service provider based in the Netherlands and Belarus.

Among the differences: “Data is an essential part of AI projects, and you have to add some information about it in the specification;” and, “AI projects are very similar to scientific projects: they both are based on hypotheses.”  

He offered an example of an AI system to classify paintings by the style of art: classicism, pop-art, Cubism, and so on. If the system is trained on black-and-white images, it is unlikely to be able to work with colored images. “This issue needs just one line of code to be fixed,” Voicehovsky stated. “But a lot of projects cannot be changed so simply.”  
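
Assuming the fix Voicehovsky has in mind is converting incoming color images to grayscale so they match the training distribution, the “one line” might look like the following; the image and the classifier it would feed are placeholders.

```python
# Hypothetical illustration of a one-line preprocessing fix: if a classifier
# was trained only on black-and-white paintings, convert incoming color images
# to grayscale before inference so they match the training distribution.
from PIL import Image

color_image = Image.new("RGB", (224, 224), color=(180, 40, 40))  # stand-in input
gray_image = color_image.convert("L")   # the "one line" fix: drop the color channels
print(color_image.mode, "->", gray_image.mode)   # RGB -> L
```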

Scientific projects and AI projects have much in common since both are based on hypotheses, he suggested. “AI engineers play the role of scientists who must prove the project hypothesis,” he stated.  

Again comparing, he stated, “For ordinary software, goals are always (or almost always) achievable and straightforward. No place for hypothetical stuff!” 

Read the source articles and information in the paper from Google researchers, “Underspecification Presents Challenges for Credibility in Modern Machine Learning,” in a blog post from Roboflow, in a blog post from Section.io, and in a blog post from Apro Software.

Executive Interview: Dr. Matt Gaston, CMU Software Engineering Institute  

Defining the AI Engineering Discipline 

Dr. Matt Gaston, director, SEI’s AI engineering division

Dr. Matt Gaston of the Carnegie Mellon University Software Engineering Institute has been involved in Applied AI, machine learning and national security leadership for many years. The founding director of the SEI Emerging Technology Center, he was recently named the first director of the SEI’s AI Division. Among his positions prior to CMU, Dr. Gaston worked for 10 years at the National Security Agency. He received his PhD in computer science from the University of Maryland Baltimore County, and an undergraduate degree in mathematics from the University of Notre Dame.  

At the AI World Government event Oct. 18-19, Dr. Gaston will be speaking on AI Engineering, a discipline to guide the development of scalable, robust and human-centered AI systems. He recently took a few minutes to talk to AI Trends Editor John P. Desmond about the work. 

AI Trends: The DOD in 1984 first sponsored the Software Engineering Institute to bring engineering discipline to software development. This new AI Division is intended to study the engineering aspects of AI design and implementation. I wondered if the work of the division will have an applied AI focus.  

Matt Gaston: Yes. The reason we’ve established this new AI Division at the Software Engineering Institute is exactly based on this history. In 1984, the DOD created the Software Engineering Institute to help the department do software as well as software could be done. Now with the major focus on AI and the rapid push to adopt and deploy these types of technologies because of the promise and power that they hold, constituent parts of the DOD are looking to the SEI to now help them figure out how to do AI as well as AI can be done.  

To your question about an applied focus, we think it’s critically important that if we’re going to be a leader in helping organizations understand how to build reliable, responsible, trustworthy AI systems, that we must also be doing some of that ourselves. So there’s most certainly an applied focus. I would say even the focus on AI engineering is really about how to apply these types of technologies in a smart, reliable, and responsible way.  

Will you still have the software development life cycle, with specific stages along the way? Does the SDLC work for the development of AI systems, or do we need something new?  

It’s somewhere in between. The traditional software development life cycle is most certainly relevant to what is needed in AI software and AI systems. But of course, adopting AI into systems poses some challenges to the traditional software development life cycle in some very important ways. First is the role of data. In modern AI systems, which are largely driven by modern machine learning, the behavior of the machine learning models that are produced is driven by the data that those systems are trained on. The importance of data in deriving system behavior is new and introduces new challenges. These include how to manage the data, how to know you have an appropriate data set, how to version control large datasets, and how to clean up and make a data set more robust. So the role of data is critically important.  
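
One small, hedged illustration of the dataset-versioning point: recording a content hash of the training data alongside the model makes it possible to verify later exactly which data produced a given behavior. Real teams typically use dedicated data version control tooling; this only shows the idea.

```python
# Minimal sketch of one way to pin a dataset version: record a content hash
# alongside the trained model so you can later check exactly which data was
# used. The records here are toy placeholders.
import hashlib
import json

def dataset_fingerprint(records: list[dict]) -> str:
    """Hash a canonical serialization of the records."""
    canonical = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

records = [{"image_id": 1, "label": "cat"}, {"image_id": 2, "label": "dog"}]
print("dataset version:", dataset_fingerprint(records)[:16])
```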

A second big challenge is the role of uncertainty. Almost all the modern AI techniques have some notion of probability or probabilistic behavior inside them. So that has to become a first-class concept in thinking about the software system as a whole and how you manage the software development life cycle, and handle that uncertainty across the development path.  

Then lastly, we need to have some interesting conversations about how these systems interact with their human counterparts. These systems are almost always going to be working for or with humans, providing information, providing decision support to a human counterpart. And the way AI capabilities work, they introduce new questions about how to make the connections between the humans and the systems transparent, understandable and usable in any variety of ways. So those are three examples of how AI and AI systems challenge and expand the traditional software development life cycle.  

What would you say are the key characteristics of AI engineering? 

Based on lots of feedback and input that we’ve collected and received from our stakeholders, from government organizations, but also industry collaborators and academic partners here at Carnegie Mellon University, and elsewhere across the country and the world in some cases, we’ve identified currently three pillars of AI engineering. Those three pillars are scalable AI, robust and secure AI, and human centered AI. I’ll say just another sentence or two about each of those.  

Scalable AI is about how to scale AI technologies up to the size, speed and complexity of the application space. For our Department of Defense stakeholders, it’s the mission space. But it’s not just scaling up, there’s also scaling out. How do you make the development and adoption of these AI technologies possible at the enterprise scale, again, in a responsible and reliable way? Also, for particular applications in the commercial world as well as the government and defense sector, how do you scale capabilities down? A lot of modern AI techniques require lots and lots of compute. In some cases we want them to work in a form factor that’s maybe really, really small, and there’s some interesting engineering challenges in doing so, and also some science that is needed to make that happen. So that’s scalable AI.  

Robust and secure AI is about test and evaluation. How do we build AI systems, machine learning models that are provably, or at least testably robust to various considerations—security but also uncertainty—and have appropriately calibrated confidence levels? So the robust and secure AI is really about test and evaluation and knowing that these systems are going to behave the way we expect them to behave.  
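
The phrase “appropriately calibrated confidence levels” can be made concrete with a simple reliability check: bucket predictions by the confidence the model reports and compare that claimed confidence with the accuracy actually observed in each bucket. The data below is synthetic and deliberately overconfident.

```python
# Minimal sketch of a calibration check: bucket predictions by reported
# confidence and compare each bucket's average claimed confidence to its
# observed accuracy. Synthetic, deliberately overconfident data.
import numpy as np

rng = np.random.default_rng(7)
confidence = rng.uniform(0.5, 1.0, size=5000)     # model's reported confidence
correct = rng.random(5000) < confidence ** 2      # an (assumed) overconfident model

bins = np.linspace(0.5, 1.0, 6)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (confidence >= lo) & (confidence < hi)
    if mask.any():
        print(f"confidence {lo:.1f}-{hi:.1f}: "
              f"claimed {confidence[mask].mean():.2f}, "
              f"observed accuracy {correct[mask].mean():.2f}")
```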

The third pillar is human-centered AI. That’s all about how these systems interact with their human counterparts, in what we might call human-machine interaction or teaming. That is, how humans can better understand and interpret both the function of these systems, and the outputs of these systems. Then there is a whole collection of policy and ethics considerations that we include in that pillar of human-centered AI.  

For the Department of Defense, what kind of AI work does the Carnegie Mellon University Software Engineering Institute get involved with?  

We have established the new AI Division at the Software Engineering Institute in response to a demand signal that we were hearing from senior leaders across the Department [of Defense] on the need for an AI engineering discipline. To establish that discipline, it is important to work on some applications, to build AI capabilities. We have ongoing work, and we have work that we plan to do in important mission applications.  

In command and control, for example, we see a huge opportunity to increase situational awareness, to provide the right information to decision makers. There are great opportunities in sensing and sensor processing. Also, logistics is an area where AI could have a huge impact. And I see some emerging application domains from a defense and national security perspective, such as the increasing importance of the implications of climate change on defense and national security. I am not aware of an enormous amount of work in that area; I see a huge possibility for applying AI technologies for good, in understanding those types of concerns.  

How will AI engineering address cybersecurity?  

The Software Engineering Institute has a long history of work and contributions in the space of secure and safe software systems, as well as cybersecurity and cybersecurity engineering writ large. So we want to build on all that experience and legacy of great work, and bring that type of thinking and knowledge and experience to the new challenges that AI presents from a security perspective. So in that regard, it’s pretty well-known that modern machine learning systems can be manipulated in multiple different ways.  

So how can a machine learning system be manipulated?  

I really like the taxonomy offered by John Beieler, who is the director of Science and Technology at the Office of the Director of National Intelligence. He boils it down to three categories. First, modern machine learning systems, for the most part deep learning systems, can be manipulated to learn the wrong thing at training time. Second, they can be manipulated to do the wrong thing. So at inference time or decision time inputs can be modified so that a modern machine learning system, a deep neural network, makes the wrong prediction, so it does the wrong thing. Then the third category is that they could reveal the wrong thing. It’s possible to probe a deep neural network, a machine learning model, out there in deployment to extract the information or the data that was used to train that model.  

Lots of detail is behind this, with many different paths to wander down in each of these categories of manipulations, but learn the wrong thing, do the wrong thing and reveal the wrong thing are the three big categories of how they can be manipulated.  
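
As a toy stand-in for the “do the wrong thing” category, the sketch below shows how a small, bounded perturbation chosen against a simple linear classifier can flip its prediction, which is the same intuition behind adversarial examples for deep networks. It is illustrative only and not an attack on any real system.

```python
# Toy illustration of the "do the wrong thing" category: for a simple linear
# classifier, a small perturbation in the direction that most lowers the
# positive class's score can flip the prediction. Weights and input are made up.
import numpy as np

w = np.array([1.0, -2.0, 0.5])      # weights of a toy linear classifier
b = 0.1
x = np.array([0.2, -0.1, 0.3])      # an input classified as the positive class

def predict(v):
    return 1 if w @ v + b > 0 else 0

epsilon = 0.3
x_adv = x - epsilon * np.sign(w)    # FGSM-style step against the positive class

print("original prediction:", predict(x))                   # 1
print("perturbed prediction:", predict(x_adv))              # 0
print("max per-feature change:", np.abs(x_adv - x).max())   # 0.3
```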

How will AI engineering consider the ethical use of AI? 

Ethics is a core consideration in our pillar that we call human-centered AI. It’s well known that the Department of Defense has adopted and published the ethical AI principles they are working toward. We want our work in AI engineering to be responsive to those ethical AI principles, and other organizations in the intelligence community have similar principles.   

From an AI engineering perspective, we are interested in thinking about ethics upfront in the design of AI systems, and then building in engineering mechanisms that can help to measure and monitor the ethical concerns system developers and users might have about how these systems are used. We see a great opportunity to make ethics a core consideration in the engineering discipline associated with AI.  

Estimates vary as to the percentage, but what is the primary reason that so many AI projects do not make it into production? Why do so many projects fail?  

I will point to a reference here, a great dataset that is run or operated by the Partnership on AI, called the AI Incidents Database. Just recently, the Center for Security and Emerging Technology at Georgetown [University] did an analysis of the incidents in that database. They identified three core reasons why AI projects fail.  

The first is specification. That means the system behavior that was built was not aligned with the actual intent. In other words, the requirements or the needs statement for that system did not capture what was intended to be built. It’s well-known that specification is hard in modern machine learning systems and in AI systems generally. [Ed. Note: Learn more at the blog of Roboflow, “Google Researchers Say Underspecification is Ruining Your Model Performance.”]

The second big area is robustness. This means the system either was not or could not be tested in a way that would guarantee its appropriate behavior once deployed. This is a known challenge in AI systems. Major investments are being made in industry and in the government on test and evaluation. It’s hard. It’s really hard to test for the right things at system development time, pre-deployment. Environments change in the wild as well.  

One concept that we’re working on in this area is what we call “beyond accuracy.” All too often, especially when it comes to machine learning, model accuracy is evaluated, which is how well the machine learning model performed a specific task. Let’s call it a classification task. But that may have not been the mission application or the operational application of that model. Many good examples of this are out there.  

The third key area where it’s been shown that AI systems fail is in assurance. That means that the appropriate mechanisms to monitor the system in operation were not there. There were no mechanisms to detect when the system might degrade in performance, or when things in the environment have changed such that the system behavior is no longer what is intended. So to recap, the three primary reasons, according to both CSET at Georgetown and the AI Incidents Database, are specification, robustness, and assurance.
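
A minimal sketch of one such assurance mechanism: compare a live feature stream against statistics recorded at training time and raise a flag when it drifts. The z-score test and threshold here are assumptions; real monitoring would use richer tests and watch model outputs as well as inputs.

```python
# Minimal sketch of an "assurance" mechanism: compare a live feature stream
# against statistics recorded at training time and flag when it drifts. The
# threshold is an arbitrary assumption; the data is synthetic.
import numpy as np

rng = np.random.default_rng(3)
training_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)
train_mean, train_std = training_feature.mean(), training_feature.std()

def drifted(live_batch: np.ndarray, threshold: float = 3.0) -> bool:
    """Flag a batch whose mean is far (in standard errors) from training."""
    standard_error = train_std / np.sqrt(len(live_batch))
    z = abs(live_batch.mean() - train_mean) / standard_error
    return z > threshold

print(drifted(rng.normal(0.0, 1.0, size=500)))   # expected False: same distribution
print(drifted(rng.normal(0.8, 1.0, size=500)))   # True: the environment changed
```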

Among the top technology trends for 2021, Gartner analysts included AI engineering, saying it will bring together various disciplines and offer a more clear path to value. Do you see it that way? And if so, what disciplines will be brought together in the AI Division at CMU’s Software Engineering Institute?  

I do see it that way. I think it’s exactly right. We’ve learned a lot in all of these other engineering disciplines and their development over decades, or even centuries in some cases. So I think there is a lot to bring together to support AI engineering. The fields that come readily to mind, obviously software engineering. I am part of the Software Engineering Institute. So we will draw on ideas and lessons and practices from software engineering. Systems engineering, taking a broader context, is also critically important. Computer science itself. There are theoretical level considerations when thinking about how to make these systems robust and reliable and understandable. And there are great opportunities in other traditional engineering disciplines, such as civil or industrial engineering. We will draw on ideas and inspiration there to make sure we’re asking the right questions and providing the right tools to make these systems reliable. 

Another field that comes to mind is what I would call human-centered design. There is a lot of work out there in industry and in design schools on how to design systems around the needs of humans, with humans in consideration as part of the system. That is critically important for AI engineering.  

Also, our overall approach to AI engineering is very much community-based. With that, we’re trying to be very open and take a very inclusive view of where we might learn key insights or leading practices on how to build these systems and test these systems and deploy these systems. So we’re very open to being surprised by insights that come from unexpected fields or disciplines. So it’s exactly right to think about AI engineering drawing from a wide variety of different fields.  

Earlier this year, CMU’s Software Engineering Institute announced a National AI Engineering Initiative aimed at growing the AI engineering discipline, one that would encourage partners to join in conducting research and funding operations. How’s that going?

To be honest, it started maybe a little slower than we had hoped, but I’d say it’s going very well. We have several key sponsors of that initiative at this point. Even more key stakeholders, people that are advising and guiding what we’re doing in that regard, and a growing set of formal partners that are part of our community-based approach to establishing an AI engineering discipline. 

Also, we have observed much more volume in the conversation about the importance of AI engineering. When we started our push into AI engineering two-and-a-half years ago, we heard a lot more talk about pursuing AI capabilities as fast as possible. That dialogue is still going on, but there’s much more consideration for, “Wait a minute. How do we do this in a smart and responsible and reliable way?” And we’re really excited that we just recently had a proposal accepted for an AAAI [Association for the Advancement of Artificial Intelligence] Spring Symposium [March 2022] on the topic of AI engineering aligned with our three pillars of AI engineering. So we’re really looking forward to bringing a much broader, somewhat academic-leaning community together in March of 2022.

Good for you. What do you see as the role of certificate programs such as those offered by Coursera in the AI education landscape? 

Maybe I’m answering a more general question, but I think these are incredibly valuable resources, not just the certificate programs, but all the available learning and training programs out there.   

I do find we have an opportunity to significantly increase what I would call AI literacy, including how to build AI systems and knowing what the right questions are to ask in going about building an AI system. That’s largely at the individual level, but we also see an organizational readiness component.  

Part of the activities at the AI Division of the Software Engineering Institute is work in digital transformation. What I mean by digital transformation is helping organizations and the individuals within those organizations better prepare to take on AI capabilities, to incorporate AI capabilities into their workflow, into their mission flow, and know how to do that—again I use these same words over and over—in a smart, reliable and responsible way. We also see a great opportunity for workforce development activities, augmenting what’s publicly available through Coursera or other offerings with executive education and professional education experiences. We also have a great partnership between the Software Engineering Institute at CMU and the academic units at CMU, like the College of Engineering and the School of Computer Science.  

On a different topic, what suggestions do you have for middle and high school students who might be interested in computers and technology, possibly AI?  

We are seeing that curriculums in K-12 education are starting to take on programs such as computational thinking or computer science, maybe even an introduction to AI course. Students should take advantage of those. Also, one thing that is exciting in AI is that there are so many open innovation challenges in AI, such as through Kaggle [an online community of data scientists and machine learning practitioners operated by Google]. 

So middle and high school students could get out there and participate in these challenges and get their hands dirty, trying to build some of these systems. I think this is just a really great opportunity regardless of whether the challenge is won or not, just a great experience to try to build some of these things. Then going beyond that, if you want to get beyond K-12, many colleges are starting to offer their courses online and in some cases for free. So that’s a way for students that are really interested and have gone as far as they can on their own to start to dig into some details of computer science and AI and related ideas.  

What is your favorite application of AI today?  

It’s a hard question; I see so many great applications of AI, and frankly, so many great opportunities to apply AI for good on many different problems and in many different domains. But one area that I’m particularly excited about is work that I’ve seen in humanitarian aid and disaster relief. 

We’ve done work in this space, but I’ll just talk about it generally. Based on commercially-available and inexpensive data from commercial satellites, we have seen a huge opportunity in recent years to analyze the planet. That includes satellite imagery to understand wildfires, including fire line prediction or planning. Or it could be automated building damage assessment from natural disasters.   

We see a great confluence of data availability, computational power and AI capabilities. In a very real way, these types of applications can have a huge impact on reducing costs, lowering risks and ultimately saving lives.   

Learn more at the Software Engineering Institute. 

UN Human Rights Commission Calls for Moratorium on Sale of AI Tech 

By AI Trends Staff 

The United Nations Human Rights Office of the High Commissioner this week called for a moratorium on the sale and use of AI technology that poses human rights risks—including the use of facial recognition software—until adequate safeguards are in place.  

Michelle Bachelet, UN High Commissioner for Human Rights

“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” stated Michelle Bachelet, the UN High Commissioner for Human Rights, in a press release.  

Bachelet’s warnings accompany a report released by the UN Human Rights Office analyzing how AI systems affect people’s right to privacy—as well as rights to health, education, freedom of movement and more. The full report entitled, “The right to privacy in the digital age,” can be found here. 

“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states,” Bachelet stated. “AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online.”  

Digital rights advocacy groups welcomed the recommendations from the international body. Evan Greer, the director of the nonprofit advocacy group Fight for the Future, stated that the report further proves the “existential threat” posed by this emerging technology, according to an account from ABC News. 

“This report echoes the growing consensus among technology and human rights experts around the world: artificial intelligence powered surveillance systems like facial recognition pose an existential threat to the future [of] human liberty,” Greer stated. “Like nuclear or biological weapons, technology like this has such an enormous potential for harm that it cannot be effectively regulated, it must be banned.”  

While the report did not cite specific software, it called for countries to ban any AI applications that “cannot be operated in compliance with international human rights law.” More specifically, the report called for a moratorium on the use of remote biometric recognition technologies in public spaces—at least until authorities can demonstrate compliance with privacy and data protection standards and the absence of discriminatory or accuracy issues.  

The report was also critical of the lack of transparency around the implementation of many AI systems, and how their reliance on large datasets can result in people’s data being collected and analyzed in opaque ways, and can result in faulty or discriminatory decisions, according to the ABC account. The long-term storage of data and how it could be used in the future is also unknown and a cause for concern, according to the report. 

“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face,” Bachelet stated. “We cannot afford to continue playing catch-up regarding AI—allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact.” Bachelet called for immediate action to put “human rights guardrails on the use of AI.”  

Report Announced in Geneva  

Peggy Hicks, Director of Thematic Engagement, UN rights office

Journalists were present at the announcement of the report in Geneva. “This is not about not having AI,” stated Peggy Hicks, director of thematic engagement for the UN rights office, in an account in Time. “It’s about recognizing that if AI is going to be used in these human rights—very critical—function areas, that it’s got to be done the right way. And we simply haven’t yet put in place a framework that ensures that happens.”  

The report also expresses caution about tools that try to deduce people’s emotional and mental states by analyzing their facial expressions or body movements, saying such technology is susceptible to bias and misinterpretation, and lacks a scientific basis.  

“The use of emotion recognition systems by public authorities, for instance for singling out individuals for police stops or arrests or to assess the veracity of statements during interrogations, risks undermining human rights, such as the rights to privacy, to liberty, and to a fair trial,” the report states.  

The report’s recommendations are consistent with concerns raised by many political leaders in Western democracies; European regulators have already taken steps to rein in the riskiest AI applications. Proposed regulations outlined by European Union officials this year would ban some uses of AI, such as real-time scanning of facial features, and tightly control others that could threaten people’s safety or rights.  

Western countries have been at the forefront of expressing concerns about the discriminatory use of AI. “If you think about the ways that AI could be used in a discriminatory fashion, or to further strengthen discriminatory tendencies, it is pretty scary,” stated US Commerce Secretary Gina Raimondo during a virtual conference in June, quoted in the Time account. “We have to make sure we don’t let that happen.”  

At the same conference, Margrethe Vestager, the European Commission’s executive vice president for the digital age, suggested some AI uses should be off-limits completely in “democracies like ours.” She cited social scoring, which can close off someone’s privileges in society, and the “broad, blanket use of remote biometric identification in public space.”  

Consistency in Cautions Issued Around the World  

The report did not single out any countries by name, but AI technologies in some places around the world have caused alarm over human rights in recent years, according to an account in The Washington Post. 

The government of China, for example, has been criticized for conducting mass surveillance that uses AI technology in the Xinjiang region, where the Chinese Communist Party has sought to assimilate the mainly Muslim Uyghur ethnic minority group.  

The Chinese tech giant Huawei tested AI systems, using facial recognition technology, that would send automated “Uyghur alarms” to police once a camera detected a member of the minority group, The Washington Post reported last year. Huawei responded that the language used to describe the capability had been “completely unacceptable,” yet the company had advertised ethnicity-tracking efforts.  

Bachelet of the UN was critical of technology that can enable authorities to systematically identify and track individuals in public spaces, affecting rights to freedom of expression, and of peaceful assembly and movement.  

In Myanmar this year, Human Rights Watch criticized the Myanmar military junta’s use of a public camera system, provided by Huawei, that used facial and license plate recognition to alert the government to individuals on a “wanted list.”  

In the US, facial recognition has attracted some local regulation. The city of Portland, Ore., last September passed a broad ban on facial recognition technology, including uses by local police. Amnesty International this spring launched the “Ban the Scan” initiative to prohibit the use of facial recognition by New York City government agencies. 

Read the source articles and information in the press release from the UN Human Rights Office; read the report, “The right to privacy in the digital age,” here; and see the accounts from ABC News, in Time and in The Washington Post. 

Manufacturing Analytics: 14 Case Studies & Success Stories

Manufacturing is one of the top three industries accounting for the biggest share of big data and analytics revenues worldwide. Manufacturing analytics has numerous use cases which enable…


Conversational AI Making Headway in Powerful Healthcare Chatbots 

By John P. Desmond, AI Trends Editor  

Conversational AI has come a long way since ELIZA, which was intended by its creator in 1964 to be a parody of the responses of a psychotherapist to his patient, as a demonstration that communication between a human and a machine could only be superficial.  

What surprised Joseph Weizenbaum of the MIT AI lab was that many people, including his secretary, assigned human-like feelings to the computer program. It is acknowledged as the original chatbot.   

Pranay Jain, cofounder and CEO, Enterprise Bot

In the 50 years since then, chatbots have evolved first to engage users in dialogues for customer service in many fields, and now to dialogues on personal medication information. “With the advent of cognitive intelligence, chatbots were given a facelift. They were able to analyze context, process intent, and formulate adequate responses,” stated Pranay Jain, cofounder and CEO of Enterprise Bot, in a contribution to AI Trends. The Switzerland-based company was founded five years ago.   

Still, chatbots incorporating AI today are challenged to successfully process technical commands, to understand human intent, to exhibit conversational intelligence and understand different languages, accents and dialects.   

Today, “The ability to understand the subtle nuances of human tonalities, speech patterns, and mimic human empathy in the form of texts and voices is what makes a chatbot truly successful across industries and verticals,” Jain stated.   

Chatbots in healthcare had been perceived as high risk, with healthcare professionals skeptical that patients would provide confidential medical information to a virtual assistant. “Today, chatbots are being designed and deployed to perform preliminary pathology and aid healthcare professionals,” Jain stated, noting that chatbots now gather initial personal information and then ask about symptoms.   

Cloud Vendors Now Into Serving Medical Markets  

Top public cloud vendors have developed technology to enhance patient and provider experiences, from diagnostics to appointment management to transcribing the notes of doctors.   

For example, in December 2019, Amazon Web Services launched Transcribe Medical, a speech-to-text service enhanced with a medical vocabulary. This complements AWS Comprehend Medical, a fact extraction engine built to identify and code key clinical elements in text, according to a recent account from CB Insights. 

In a similar way, Google’s Healthcare Natural Language API uses AI to identify and code clinical facts in written medical text. Meanwhile, with its $19.7B pending acquisition of Nuance, Microsoft now offers a full suite of voice-based patient engagement and provider documentation solutions. “These services are igniting a wave of innovation among startups, which are developing on top of the cloud-based offerings,” stated the CB Insights account.  

The Dragon Ambient eXperience product offering from Nuance is used in healthcare settings, especially for transcribing doctor’s notes based on spoken conversations. It uses ambient sensing technology to listen to the conversations and offers some workflow and knowledge automation to complement the electronic health record of the patient. 

“It automatically generates the notes that doctors need to write using natural language generation,” stated Guido Gallopyn, VP of Healthcare Research at Nuance, who leads the project, in an interview with AI Trends. “It takes the transcript of what the doctor and the patient say and translates it into a clinical report.” Its ability rivals that of a human transcriber doing the same thing, he said.  
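As an illustration of how a cloud service of this kind is typically called, the sketch below sends an invented clinical note to Amazon Comprehend Medical through the boto3 SDK; it assumes AWS credentials and a supported region are already configured, and the note text is made up.  

import boto3

# Amazon Comprehend Medical extracts and codes clinical entities from free text.
client = boto3.client("comprehendmedical", region_name="us-east-1")

note = "Patient reports chest pain and was prescribed 81 mg aspirin daily."
response = client.detect_entities_v2(Text=note)

for entity in response["Entities"]:
    # Each entity carries a category (e.g. MEDICATION, MEDICAL_CONDITION)
    # and a confidence score.
    print(entity["Category"], entity["Text"], round(entity["Score"], 2))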

VCs Have Invested Over $800 Million in Health Chatbot Startups  

A recent analysis by Crunchbase found that VCs have invested more than $800 million in at least 14 known startups that offer some version of a chatbot with health features, according to a recent account in The Medical Futurist.  

The rise in healthcare chatbots can ease the burden on healthcare professionals, the authors stated, through better organization of patient pathways, medication management, help in emergency situations or with first aid, or offering a solution for simpler medical issues.  

Northwell Health in New York launched a chatbot incorporating AI to help reduce no-shows for colonoscopies, which were running at about a 40% rate among less-privileged patients, according to the Futurist account. The AI chatbot is being offered at Long Island Jewish (LIJ) Medical Center and Southside Hospital. Northwell states that the personalized chatbot will “encourage patients by addressing misunderstandings and concerns about the exam, delivering information in a responsive, conversational way over email or text.”  

The ability of conversational chatbots with AI to complete a series of steps across different channels makes them powerful. “AI-based conversational chatbots are able to converse with users in a highly personalized way, even completing transactions, and across all channels such as web, mobile, messenger apps, and voice,” stated Chris Ezekiel, founder and CEO of Creative Virtual, a supplier of the V-Person virtual assistant, in an account in Interesting Engineering.   

“They are also implemented within contact centers as virtual contact center agents listening into conversations and advising on the best answer or next best action. The best deployments are where the AI and humans work in harmony to provide the best possible customer experience,” stated Ezekiel. 

Other trends in conversational AI chatbots include, according to an account from Analytics Insight:  

Greater personalization. Conversation bots are able to remember conversation context, past dialogues and user preferences. They can also understand sentiment and mood and respond accordingly, especially to cross-sell and up-sell products and services to users.   

Augmented reality is making its way into chatbots, such as to show how a coffee table might look in your living room, or how some new clothes would fit you. Organizations including IKEA, Zara, L’Oreal, and Amazon are testing its potential.   

Business users in chatbot development. No longer the exclusive domain of developers and linguists, chatbot creation is now including business users who are closer to understanding customer needs, to make the chatbot more engaging. This includes script writers who guide the flow of conversation through brand value to open-ended questions.   

Selection of Healthcare-oriented Chatbots Described 

A selection of healthcare-oriented chatbots incorporating AI was assembled by The Medical Futurist site, including:  

Babylon Health. This British subscription-based online medical consultation and health service, founded in 2013, offers AI consultation based on personal medical history and common medical knowledge, as well as live video consultation with a doctor when a patient needs it.  

Users report symptoms of their illness to the app, which checks them against a database of diseases, then offers an appropriate course of action. The UK’s National Health Service (NHS) began using the chatbot to dispense medical advice for a trial period in 2017. During the COVID-19 spread in 2020, the NHS launched an app-based AI triage model with Babylon, which is expanding in the US and worldwide.   

Ada Health. Over 1.5 million people have tried the health companion app, which aims to help assess the user’s health based on the indicated symptoms using its vast, AI-based database. 

Daniel Nathrath, CEO of Ada Health, Berlin, Germany

Daniel Nathrath, CEO of Ada Health, based in Berlin, Germany, stated to The Medical Futurist that in the future, “Ada will become a standard diagnostic tool for doctors. That is already the case; users can share their health assessment with their doctor or, in the UK, they can choose to consult with a qualified GP via our Doctor Chat feature. Ada will also become much more of an ongoing health companion, helping patients and doctors to intelligently monitor health data over the long term to enable predictive and proactive care.” A voice interface allows users to trial Ada through Amazon Alexa. 

HealthifyMe is India’s top health and fitness app, but the company’s long-term goal is to become the global leader, according to an account in TechCrunch. It is popular among Indian expat and Indian American communities and it will target other customer segments after raising $75 million in a Series C round recently, co-founder and CEO Tushar Vashisht stated.  

HealthifySmart and HealthifyStudio, its newest products, now contribute 25% of the company’s revenue. Its user base and revenue have doubled over the last year, recently surpassing 25 million downloads, and the company is on target to reach $50 million in annualized recurring revenue within the next six months. It has about 1,500 trainers and coaches on the platform, with plans to add 1,000 more to support its expansion. 

“Today in the U.S., you have free DIY calorie counting solutions like MyFitnessPal and expensive human-assisted coaching and diet solutions like Noom and WeightWatchers,” stated Vashisht. “But nothing in the middle exists that allows one to track nutrition and calories while getting advice at an affordable price point.”  

Read the source articles and information in CB Insights, in The Medical Futurist, from Analytics Insight and in TechCrunch.  

Under the Hood of Enterprise Bot  

Pranay Jain, cofounder and CEO of Enterprise Bot, offered these responses to some queries from AI Trends about his product.   

How does the product achieve advanced natural language processing (NLP)? 

NLP allows a computer algorithm to understand and interpret a user’s request. Enterprise Bot uses a variant of Bidirectional Encoder Representations from Transformers, better known as BERT. BERT was created and published in 2018 by Jacob Devlin and his colleagues from Google and was then open-sourced. Using a bidirectional transformer allows us to truly understand the context of different words. Let’s look at two simple phrases: “Book me a ship” vs. “Ship me a book.” If you use a keyword approach, or do not keep track of how the words relate to each other, you may not be able to do what the user really wants.  
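The contextual difference between those two phrases can be seen directly in a bidirectional model’s embeddings. The following is a minimal sketch, not Enterprise Bot’s code, assuming the open-source Hugging Face transformers library, PyTorch, and the standard bert-base-uncased checkpoint are available.  

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word` within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (num_tokens, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

# The same surface word "book" gets different vectors in the two phrases,
# because the surrounding context differs -- something a keyword match ignores.
v1 = word_vector("book me a ship", "book")
v2 = word_vector("ship me a book", "book")
print(torch.cosine_similarity(v1, v2, dim=0).item())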

How does the product achieve sentiment analysis?   

To achieve sentiment analysis, the machine is given tens of thousands of examples from different sample datasets that teach it what different sentiments look like. Using this sample data, the algorithm builds a model of how different words are connected and is able to infer the sentiment. It also learns from other signals such as capitalization, exclamation marks, and many other indicators. The beauty of AI based on sentence structure is that it can understand what a teenager would mean by the word “sick,” and does not assume directly that it relates to a negative emotion.  
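As a rough illustration of that training process (a sketch, not Enterprise Bot’s pipeline), the example below fits a small classifier on a handful of labeled sentences with scikit-learn; character n-grams with lowercasing disabled let the model pick up surface signals such as capitalization and exclamation marks. A real system would train on tens of thousands of examples.  

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled sample; a production model would use far more data.
train_texts = [
    "This app is sick, love it!",      # slang used positively
    "I feel sick and dizzy today.",    # literal, negative context
    "GREAT service, thank you!!!",
    "Terrible wait times, very upset.",
]
train_labels = ["positive", "negative", "positive", "negative"]

# lowercase=False keeps capitalization; character n-grams keep punctuation like "!"
model = make_pipeline(
    TfidfVectorizer(lowercase=False, analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(train_texts, train_labels)
print(model.predict(["That demo was sick!"]))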

Does Enterprise Bot make any use of the GPT-3 large language model?  

We have benchmarked against GPT-3. [GPT-3] is an excellent resource for a generic model, but when you have industry-specific, pre-labelled data with a focused deep learning model, [Enterprise Bot] outperforms GPT-3. GPT-3 is still one of the best models out there for untrained and unlabeled data, but for large enterprises with specific goals, it presently scores lower than some other algorithms, including BERT.   

Anything else to mention about the AI tech underlying the product?  

AI is changing every day. Better AI models and better training are enabling us to move from supervised to unsupervised models. Enterprise Bot is launching a new product soon that will showcase how you can go live with a truly intelligent chatbot without having to spend days feeding it with samples and responses. This will truly change how conversational AI chatbots are made, updated, and improved, in a way never seen before.  

Is anyone using Enterprise Bot in healthcare?  

Yes, we are presently piloting the solution with healthcare providers and we will be going live with a major healthcare provider to their customers in the next couple of months. Due to an NDA, we cannot reveal their name until then.   

About Enterprise Bot 

Enterprise Bot currently has diverse clients including Generali, Afterpay, and London North Eastern Railway (LNER). The company employs 70 people between Switzerland and India. The company generated €2 million in revenue in 2020, with over €4 million predicted for 2021. 

Learn more at Enterprise Bot. 

Gartner Hype Cycle for AI 2021 Highlights AI Orchestration, Governance 

By John P. Desmond, AI Trends Editor   

An update to Gartner’s Hype Cycle for AI 2021 report, prepared by analysts Shubhangi Vashisth and Svetlana Sicular, identifies four AI megatrends that are underway:  

Companies are looking to operationalize AI platforms to enable reusability, scalability, and governance and speed up AI adoption and growth. AI orchestration and automation platforms (AIOAPs) and model operationalization (ModelOps) reflect this trend;  

Innovation in AI means efficient use of all resources, including data, models, and compute power. Multi-experience AI, composite AI, generative AI, and transformers are examples of this trend;  

Responsible AI includes explainable AI, risk management, and AI ethics for increased trust, transparency, fairness, and auditability of AI initiatives;  

Small and wide data approaches enable more robust analytics and AI, reduce dependency on big data, and deliver more complete situational awareness.  

“AI innovation is happening at a rapid pace, with an above-average number of technologies on the Hype Cycle reaching mainstream adoption within two to five years,” stated Vashisth, senior principal research analyst at Gartner, in a press release. “Innovations including edge AI, computer vision, decision intelligence, and machine learning are all poised to have a transformational impact on the market in coming years.”  

Svetlana Sicular, research vice president, Gartner

Svetlana Sicular, research vice president at Gartner, stated, “Increased trust, transparency, fairness and auditability of AI technologies continues to be of growing importance to a wide range of stakeholders. Responsible AI helps to achieve fairness, even though biases are baked into the data; gain trust, although transparency and explainability methods are evolving; and ensure regulatory compliance while grappling with AI’s probabilistic nature.”  

Reported in TechRepublic, the analysts also identified six technologies in what Gartner terms the “innovation trigger” phase of the hype cycle, which are expected to reach the “plateau of productivity” within two to five years. The six are:  

Composite AI 
AI orchestration and automation platform 
AI governance 
Generative AI 
Human-centered AI, and  
Synthetic data

Composite AI From SAS’ Point of View 

Gartner refers to composite AI as the “combination of different AI techniques to achieve the best result.” Using multiple types of AI technology to solve a business problem goes by a different name for Saurabh Gupta, senior director of analytics product management at SAS.   

“I’m used to saying multi-disciplinary analytics,” Gupta stated in a recent account in Datanami. 

It is possible machine learning alone could solve a straightforward business problem, “But in order to solve the problem fully, you’ve got to use the combination of techniques,” Gupta stated.  

The right choice of AI technology and techniques to use depends on the understanding of the AI engineers of the business problem they are trying to address, and the data sets available to help. “We always begin with the question in mind,” Gupta stated. “So what’s the business problem you’re looking to solve? And ultimately, what is the business decision you’re looking to make out of that particular business problem?”  

Once the business problem has been defined, the application designers next need to decide what data will inform the answer to the problem. “If the problem I’m looking to solve involves textual data or voice, as an example, then you know a lot of computer vision and NLP will come into play,” Gupta stated. “If I have primarily structured data, then I know that a lot of statistics, ML, forecasting will come into play.” 

AI Orchestration and Automation Example  

Dish Network this week announced in a press release that it has selected IBM as its partner to help automate its cloud-native 5G network.  

Dish will use IBM’s AI-powered automation and network orchestration software and services to bring 5G network orchestration to its business and operations platforms. The operations of Dish’s cloud-native 5G network architecture will be driven by the intent-driven orchestration features and AI of the IBM offering.   

Dish will use IBM’s Cloud Pak for Network Automation software, which is designed to “stitch” hardware and software resources together efficiently, enabling Dish to accelerate the creation and delivery of new services.   

Marc Rouanne, chief network officer, DISH Wireless

“We are creating a ‘network of networks,’ where each enterprise can custom-tailor a network slice or group of slices to achieve their specific business needs,” stated Marc Rouanne, chief network officer, DISH Wireless. “IBM’s orchestration solutions leverage AI, automation and machine learning to not only make these slices possible, but to ensure they adapt over time as customer use evolves.”  

AI Governance Includes Ethics, Transparency  

AI governance is defined as the practice of establishing accountability for the risks that come with using AI. 

Scandals that have occurred in the world of AI have led to a level of distrust. When Cambridge Analytica engaged in the massive harvesting and use of personal data of millions of Facebook users without their consent, it caused many to doubt that AI could be kept under control and be helpful to people, suggests an account in AI Time Journal.  

Many in the AI community are invested in policy frameworks, practices, and outcomes to be a force for positive good. In an attempt to define policy in AI, Tim Dutton, Founder and Editor-in-Chief of Politics+AI, defines it as “those public policies that maximize the benefits of AI, while minimizing its potential costs and risks.” 

Accountable AI should equate with transparent AI, in the view of Elina Noor, Director of Political-Security Affairs at the Asia Society Policy Institute. She noted that, “in the last few years, there has been increasing awareness of the need for greater transparency and accountability vis-a-vis AI algorithms. How is data being collected? What kinds of data sets are being compiled? How representative are data sets? Is bias accounted for or not? These are just a few questions that have to be clarified if AI is to be a positive force for the communities it serves.” 

Discussion of AI governance is being furthered by a growing understanding of the surrounding terminology. Stephan Jou, CTO of the Interset line of business at Micro Focus, stated that a “common vocabulary, understanding, and definitions on what it means for AI to be ethical and responsible, how to enforce, and how to implement, are all prerequisites to making progress in what began as a very fuzzy, ill-defined area.”  

Generative AI Leading to New Applications  

This is where AI becomes the creator of content. Two technologies are at the heart of generative AI: generative adversarial networks (GANs) and variational autoencoders (VAEs), according to a recent account in Forbes.  

GANs are a way to train a generative model by framing the problem as a supervised learning problem with two sub-models: the generator model that is trained to generate new examples, and the discriminator model that tries to classify examples as either real or fake.   

In 2016, AI researcher Yann LeCun called GANs “the most interesting idea in the last ten years in machine learning.” GANs are used, for example, to generate 3D models required in video games, animated movies or cartoons.   
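A minimal, generic training loop makes the two sub-models concrete. The sketch below, written in PyTorch on synthetic two-dimensional data, is purely illustrative and far smaller than anything used to generate game or film assets.  

import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 2, 64

# Generator maps random noise to candidate samples.
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator scores samples as real (1) or fake (0).
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, data_dim) * 0.5 + 2.0   # stand-in "real" data
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: classify real vs. generated samples.
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to fool the discriminator.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()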

VAEs consist of two neural networks that work in tandem to produce an output. The first network (the “encoder”) takes a piece of input data and compresses it into a lower-dimensional representation. The second network (the “decoder”) takes this compressed representation and, based on a probability distribution of the original data’s attributes and a randomness function, generates novel outputs based on the original input.  

VAEs are deep learning techniques used to draw images, achieve state-of-the-art results in semi-supervised learning, as well as to interpolate between sentences.   
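A toy PyTorch sketch of that encoder/decoder pairing, shown below for illustration only, compresses an input into the parameters of a latent distribution, samples with a bit of randomness, and decodes the sample back to the original dimensionality.  

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, data_dim=784, latent_dim=8):
        super().__init__()
        self.encoder = nn.Linear(data_dim, 2 * latent_dim)  # outputs mean and log-variance
        self.decoder = nn.Linear(latent_dim, data_dim)

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # the randomness function
        return self.decoder(z), mu, logvar

vae = TinyVAE()
x = torch.rand(32, 784)          # stand-in input data
recon, mu, logvar = vae(x)

# Training minimizes reconstruction error plus a KL term that keeps the
# latent distribution close to a standard normal.
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = F.mse_loss(recon, x) + 1e-3 * kl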

Read the source articles and information in a press release from Gartner, in TechRepublic, in Datanami, in a press release from Dish Network, in AI Time Journal and in Forbes. 

President Prioritizes Cybersecurity Protection, Response and Zero Trust 

By AI Trends Staff 

President Joe Biden in May released an Executive Order on improving the nation’s cybersecurity, an effort to respond to recent cyber espionage campaigns and fundamentally rethink how security is provided in the nation’s digital infrastructure. 

“In our view, the EO does two things,” stated an advisory from PwC, the professional services network. First, it calls for making the federal government systems stronger and safer, so they are more difficult to break into, and it pushes specific actions to modernize cybersecurity in the federal government, such as a zero trust architecture.   

Second, the EO sets a goal for more effective and agile federal government responses. It requires IT providers to report cyber incidents and removes contractual barriers for them to share information with government entities.   

US adversaries understand that the country has a digital economy, with much of the critical infrastructure in the hands of the private sector. Nation-states and criminal groups engage in ransomware attacks, cyberespionage, and disinformation operations that fuel social discord and make headlines.  

Sean Joyce, global cybersecurity practice lead, PwC

“The United States needs a more organized approach to these cyberthreats—one that enables the private and public sectors to work together ahead of attacks rather than play catch-up,” stated Sean Joyce, lead of PwC’s global cybersecurity practice, in a recent account in The Washington Post.  

Today, federal government responsibility for cyberspace is divided among the intelligence community, law enforcement, military, the Cybersecurity and Infrastructure Security Agency (CISA) and regulatory agencies, he notes. These agencies are among those with primary responsibility for protecting government networks, which are scanned and attacked every day, and protected by a range of tools to identify, protect against and respond to the threats.  

However, according to CISA, more than 80% of the energy infrastructure is owned and operated by the private sector. “These companies are expected to defend their assets against highly organized criminal groups that are sometimes surrogates for nation-state actors—and to do so without our government’s support,” states Joyce. “We need a different approach to protect our way of life.” 

His suggestion is to have one organization within the government headed by a new national cyber director overseeing three units: one unit focused on strengthening public-private partnerships, one on offensive and defensive operations, and one focused on intelligence collection, analysis and sharing. 

A centralized partnership unit led by the CISA director could coordinate cyber efforts on behalf of the government. That way, businesses know who to call.  

To ensure responses to attacks are handled in a coordinated manner, the unit focused on offensive and defensive operations would take a lead. An attack, such as the recent SolarWinds malware offensive that affected many businesses, would be handled by this unit, led by the FBI and NSA, agencies with primary jurisdiction in national security matters.   

This central unit could be staffed by both the private sector and government, to reinforce the public-private partnership needed to counter cyber threats, Joyce suggests.   

Cybersecurity on Agenda at Biden-Putin Summit in Geneva  

President Biden kept the focus on cybersecurity concerns during a meeting with Russian President Vladimir Putin on June 16 in Geneva. The US has accused Russia of giving cybercriminals and ransomware gangs within its borders free rein to operate, according to a recent account in Dice. 

“This year has been focused on ransomware and IoT and critical infrastructure attacks that have been destructive in both operational capabilities and to the financial state of many businesses,” stated Tyler Shields, CMO at security firm JupiterOne. “We also saw a continued increase in the number of data breaches resulting from unknown and incorrectly configured data stores being targeted.”  

He suggested that, “Fundamentally, enterprises and critical infrastructure providers need to do a better job of knowing what assets exist in their network and how they are configured at any given point in time.”  

The Dice authors suggested the following cybersecurity trends bear watching through the end of 2021:  

Hybrid work models.  As more workers plan to split time between home and office work environments, the attack surface is expanded.   

“As employees return to the office, you can certainly expect an immediate uptick in support calls as infected devices attempt to connect directly to the corporate network,” stated John Morgan, CEO at Confluera, supplier of cloud security detection and response services. “What I think you should watch out for, though, is not the immediate uptick, but rather the attack that simmers slowly and travels under the radar. It’s those attacks that will slip through your fingers.”  

Hackers who have gained entry may be willing to wait weeks or months to begin an attack, lying dormant, maybe taking small steps not easy to detect. IT and security analysts need tools to correlate weak signals to make sense of an attack in progress, or to correlate events occurring weeks or months apart. “This gap in security coverage is what organizations should be concerned about,” Morgan stated. “Once an attacker gains access into a corporate device or network, they are in no hurry to navigate from servers to servers looking for their prize.”  

Return of Shadow IT. With employees returning to offices to some degree, they are likely to bring the devices they have come to rely on, or the apps they used to conduct their work, which opens the door to a wave of shadow IT technology and the security issues that follow. (Shadow IT refers to information tech systems deployed by individuals or departments outside central IT.) 

“As employees transition back to the office, organizations find themselves needing to get a better handle on apps, services, and networks that could be accessed through personal devices,” stated Bert Kashyap, CEO and cofounder of security firm SecureW2, to Dice. “Implementing device trust through digital certificates is at the core of zero trust projects for the second half of the year, as organizations need assurances that device security standards are being met.” 

Zero trust. President Biden’s Executive Order on cybersecurity put zero trust at the top of the security priority list for the federal government. Experts see zero trust adoption skyrocketing in the second half of 2021. Zero trust essentially means the concept of trust is eliminated from an organization’s network architecture. Identities need to be verified. 

“Organizations need to strongly consider a zero trust approach to security, which can ensure damage is limited even in the case that privileged accounts are compromised,” stated Kevin Dunne, president of security firm Pathlock. “Rationalizing the applications, identities, access and roles into a manageable and understandable structure is the foundation of a zero trust architecture. From there, organizations can implement more investigative and preventative policies to ensure that the access that has been granted is being used as it was intended to be.”  

The meaning of zero trust is of course subject to interpretation, but it stems from a recognition that an organization cannot physically control every device its employees use anymore. Also, in the old model, once an attacker slips through a perimeter defense, remotely or by physically infiltrating an organization, the network would grant them a degree of trust and freedom. That may not be good enough anymore.  

Heather Adkins, senior director of information security, Google

Google experienced a sophisticated attack against its corporate network in 2011, stated Heather Adkins, Google’s senior director of information security, in a recent account in Wired. “Hackers backed by the Chinese government rampaged through Google’s networks, exfiltrating data and code while trying to establish backdoors, so they could get back in if Google tried to kick them out,” she stated.  

The experience caused the company to rethink its cybersecurity system. “We realized that the way we were all taught to build networks just didn’t make any sense. So we went back to the drawing board,” Adkins stated. Today, “Even if someone had access to a Google machine, nothing trusts it. It’s much more difficult for an attacker because we’ve changed the battlefield.” 

Read the source articles and information from PwC, in The Washington Post, from Dice and in Wired. 

Enterprise IoT Deployments, Spurred by Growth in 5G, Pose New Challenges 

By AI Trends Staff  

The increased availability of 5G networks is spurring interest in enterprise IoT deployments, which is causing a range of questions to surface around how the deployments are going to work, especially with existing operational systems, according to a new report from technology research firm Information Services Group (ISG).  

Mobile network operators, trying to manage costs as they transition to 5G networks, may choose to shut down older networks to reallocate available spectrum to 5G, in a process known as “shuttering.” These events can happen on a per-operator basis, which makes them difficult to manage.  

Ron Exler, principal analyst, ISG Research

“While 5G hype is rampant, not all spectrum types are available in a particular country or on a particular mobile network,” stated Ron Exler, principal analyst with ISG Research and author of the report, in a press release. “Enterprises need to work with their network and IoT providers to manage the selection of replacement technologies in advance of shuttering events, preferably in conjunction with 5G deployment.”  

US enterprises are moving ahead on industrial IoT, interconnected sensors, instruments, and other devices networked together with industrial applications in, for example, manufacturing and energy. “Manufacturers need smart factories and supply chains to keep up with the competition,” stated Dave Lewis, partner, ISG Manufacturing. “Many organizations already have chosen their IoT platforms and use cases, and are looking to their service providers to help them scale deployments.”  

To respond to concerns from companies about cybersecurity, many IoT service providers are building security services into their consulting and managed services engagements, the report states. In particular, engagements focused on IT and operational technology (OT) convergence are concerned about cybersecurity. Many security services cover the entire IoT ecosystem from the edge to the cloud to on-premises systems.  

The convergence of IT and OT systems is a major challenge for enterprises deploying IoT systems, the report says. The use of legacy systems in both worlds can slow down integration and mobility. Service providers apply industry expertise and IT proficiencies to develop the converged systems.   

Service providers are also helping enterprises deploy artificial intelligence in IoT systems, including on the edge, the report adds. AI can help IoT systems better process large volumes of data, and service providers often offer customized AI for IT Operations (AIOps) systems for specialized customer needs.  

Move Is On to Update Aging Industrial Control Systems  

Industrial control systems that have been adopted in many industries to monitor and control industrial assets are aging, and their maintenance is time-consuming. Many industries are seeing an opportunity to make the systems more efficient by integrating them with digital process management workflows, according to a blog post from Flowfinity. The Vancouver, BC-based company supplies a platform for low-code development of business process applications.   

Modern IoT platforms offer an opportunity to reimagine the OT system and reduce the cost of maintaining complex integrations required to update existing control systems. Today, developers can buy pre-configured controllers with internal processing power and built-in database and workflow integrations. For example, developers can configure dashboards to visualize real-time sensor data, and couple that with existing OT systems to analyze large quantities of data quickly, providing a view in a large, physically-distributed utility or manufacturing facility.  

Predictive maintenance is one benefit. An IoT sensor can collect calibration data many times a minute. As soon as a machine falls out of an acceptable range, an edge device automatically flags the error and creates a maintenance order in a workflow solution. This allows an employee to step in to perform an assessment quickly and take corrective action to prevent a serious malfunction, or accurately predict when a future maintenance cycle should be scheduled to optimize uptime, the Flowfinity blog suggests.  
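In simplified form, the edge-side logic might look like the Python sketch below; the acceptable range and the create_maintenance_order helper are hypothetical stand-ins for a real workflow integration, not Flowfinity’s product.  

ACCEPTABLE_RANGE = (4.8, 5.2)  # hypothetical calibration limits for a machine

def create_maintenance_order(machine_id: str, reading: float) -> None:
    # Placeholder for the call into the workflow / ticketing system.
    print(f"Maintenance order created for {machine_id}: reading={reading:.2f}")

def on_sensor_reading(machine_id: str, reading: float) -> None:
    low, high = ACCEPTABLE_RANGE
    if not low <= reading <= high:
        create_maintenance_order(machine_id, reading)

# Simulated readings arriving many times a minute from the sensor.
for value in (5.01, 5.05, 5.31):
    on_sensor_reading("press-7", value)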

The company’s M1 Controller is said to enable this automation by being capable of taking control of a physical asset while also launching the appropriate workflow actions and recording data.  

Essential Steps to Secure Enterprise IoT Deployments 

The move to IoT applications brings with it new security risks, with each device added to the network representing a potentially vulnerable endpoint. Research from IDC analysts projects that the number of IoT devices will grow to 41 billion by 2025, according to a blog post from Telit, an IoT communications firm based in London.  

Mihai Voicu, CISO, Telit

“Organizations cannot afford to ignore this,” stated the author, Mihai Voicu, chief information security officer at the firm. “By the time risks of compromise arise, it’s often too late to manage them.” Among his suggestions for securing IoT deployments:  

Ensure data encryption. Enterprise IT teams must ensure all data is encrypted in both directions from every endpoint, using standard encryption methods (a brief illustrative sketch follows this list of suggestions). “More endpoints means more entry points for bad actors,” Voicu stated. “Any internet-connected device presents an exploitable attack surface.”  

Minimize physical security threats to field devices. Determine which staff members have access, how they gain access, and how these permissions can be revoked when necessary. Have measures in place to ensure only authorized users can gain access to the devices, to ensure they are not compromised or stolen.  

Secure end-to-end communications. Edge computing introduces new vulnerabilities, such as not being able to change default passwords and operators unfamiliar with the evolving IoT cybersecurity landscape. Operators need to study best practices including end-to-end encryption, long-term plans for edge computing deployment, securing devices with strong passwords, encryption keys, and even biometric authentication systems.  

“From device to data transportation to platform communications, ensuring secure communications should be a priority,” Voicu stated. “The most effective way to guarantee data security is by choosing a single provider for all hardware, edge devices and management software. Having one provider reduces potential security risks and creates a sealed environment with fewer entry doors for cybercriminals.”  

Keep track of IoT devices. Fewer than 20% of respondents to a recent Ponemon Institute and Shared Assessments survey could identify most of their organization’s IoT devices. Further, 56% reported not keeping an inventory of IoT devices.  

“This study highlights a serious issue,” Voicu stated. Firmware and hardware must be updated with the latest security patches and additional features to remain optimally secure. Over time, devices may require battery replacements, repairs, or other maintenance. “If you take the time to consider how you’ll manage these tasks before deployment, you can minimize security risks,” he stated. 
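As a concrete illustration of the first recommendation above, encrypting payloads in both directions, the sketch below uses a standard symmetric scheme from the open-source Python cryptography package. It is not any vendor’s implementation; real deployments would also rely on TLS and a secure key-provisioning process.  

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # provisioned to both the device and the platform
device_side = Fernet(key)
platform_side = Fernet(key)

# Device -> platform: encrypt the sensor payload before it leaves the endpoint.
reading = b'{"sensor": "temp-01", "value": 21.7}'
ciphertext = device_side.encrypt(reading)
print(platform_side.decrypt(ciphertext))

# Platform -> device: commands travel encrypted in the other direction too.
command = platform_side.encrypt(b'{"command": "set_interval", "seconds": 60}')
print(device_side.decrypt(command))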

Planning for the enterprise IoT deployment is critically important, suggests Nick Earle, CEO of Eseye, an IoT connectivity supplier. The track record of IoT deployments so far is not so good. Cisco Systems found that more than 75% of IoT deployments fail, and Microsoft estimated that 30% of IoT projects fail at the Proof of Concept (PoC) stage, according to an account from IoT news.  

Invest in device design, prototyping, and pre-deployment testing, Earle advised. Device design must consider the need to standardize and simplify production and deployment. “Appliances should be future-proofed,” he stated, suggesting that IoT devices with a single stock-keeping unit and one SIM are suited to global deployment.  

“The key to success is ensuring that your device performs in a predictable way,” Earle stated. “Devices not only need to connect to any network, but also adapt to network variances automatically and stay connected.”  

Read the source articles and information in a press release from ISG Research, a blog post from Flowfinity, a blog post from Telit and from IoT News.