By John P. Desmond, AI Trends Editor
An update to Gartner’s Hype Cycle for AI, 2021 report, prepared by analysts Shubhangi Vashisth and Svetlana Sicular, identifies four AI megatrends that are underway:
Companies are looking to operationalize AI platforms to enable reusability, scalability, and governance, and to speed up AI adoption and growth. AI orchestration and automation platforms (AIOAPs) and model operationalization (ModelOps) reflect this trend;
Innovation in AI means efficient use of all resources, including data, models, and compute power. Multi-experience AI, composite AI, generative AI, and transformers are examples of this trend;
Responsible AI includes explainable AI, risk management, and AI ethics for increased trust, transparency, fairness, and auditability of AI initiatives;
Small and wide data approaches enable more robust analytics and AI, reduce dependency on big data, and deliver more complete situational awareness.
“AI innovation is happening at a rapid pace, with an above-average number of technologies on the Hype Cycle reaching mainstream adoption within two to five years,” stated Vashisth, senior principal research analyst at Gartner, in a press release. “Innovations including edge AI, computer vision, decision intelligence, and machine learning are all poised to have a transformational impact on the market in coming years.”
Svetlana Sicular, research vice president, Gartner
Svetlana Sicular, research vice president at Gartner, stated, “Increased trust, transparency, fairness and auditability of AI technologies continues to be of growing importance to a wide range of stakeholders. Responsible AI helps to achieve fairness, even though biases are baked into the data; gain trust, although transparency and explainability methods are evolving; and ensure regulatory compliance while grappling with AI’s probabilistic nature.”
Reported in TechRepublic, the analysts also identified six technologies in what Gartner terms the “innovation trigger” phase of the hype cycle, which are on the way up to the “plateau of productivity” within two to five years. The six are:
AI orchestration and automation platform
AI governance
Composite AI
Generative AI
Human-centered AI, and
Synthetic data
Composite AI From SAS’ Point of View
Gartner refers to composite AI as the “combination of different AI techniques to achieve the best result.” Saurabh Gupta, senior director of analytics product management at SAS, knows the practice of using multiple types of AI technology to solve a business problem by a different name.
“I’m used to saying multi-disciplinary analytics,” Gupta stated in a recent account in Datanami.
It is possible machine learning alone could solve a straightforward business problem, “But in order to solve the problem fully, you’ve got to use the combination of techniques,” Gupta stated.
The right choice of AI technologies and techniques depends on how well the AI engineers understand the business problem they are trying to address, and on the data sets available to help. “We always begin with the question in mind,” Gupta stated. “So what’s the business problem you’re looking to solve? And ultimately, what is the business decision you’re looking to make out of that particular business problem?”
Once the business problem has been defined, the application designers next need to decide what data will inform the answer to the problem. “If the problem I’m looking to solve involves textual data or voice, as an example, then you know a lot of computer vision and NLP will come into play,” Gupta stated. “If I have primarily structured data, then I know that a lot of statistics, ML, forecasting will come into play.”
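Gupta’s decision process amounts to a dispatch from the characteristics of the available data to candidate technique families. The sketch below makes that mapping concrete; the category names and the mapping itself are illustrative assumptions, not a SAS interface.

```python
# Illustrative sketch: choosing AI technique families based on the data at
# hand, following the reasoning Gupta describes. The categories and the
# mapping are assumptions for illustration only.

TECHNIQUE_MAP = {
    "text": ["natural language processing"],
    "voice": ["speech recognition", "natural language processing"],
    "image": ["computer vision"],
    "structured": ["statistics", "machine learning", "forecasting"],
}

def candidate_techniques(data_types):
    """Return the union of technique families suggested by the data types."""
    techniques = []
    for dtype in data_types:
        for tech in TECHNIQUE_MAP.get(dtype, []):
            if tech not in techniques:
                techniques.append(tech)
    return techniques

# A problem mixing voice recordings with structured records calls for a
# composite of several technique families:
print(candidate_techniques(["voice", "structured"]))
```

A problem touching more than one data type naturally pulls in more than one technique family, which is the essence of the composite AI idea described above.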
AI Orchestration and Automation Example
Dish Network announced in a press release this week that it has selected IBM as its partner to help automate its cloud-native 5G network.
Dish will use IBM’s AI-powered automation and network orchestration software and services to bring 5G network orchestration to its business and operations platforms. The intent-driven orchestration features and AI of the IBM offering will drive the operations of Dish’s cloud-native 5G network architecture.
Dish will use IBM’s Cloud Pak for Network Automation software, which is designed to “stitch” hardware and software resources together efficiently, enabling Dish to accelerate the creation and delivery of new services.
Marc Rouanne, chief network officer, DISH Wireless
“We are creating a ‘network of networks,’ where each enterprise can custom-tailor a network slice or group of slices to achieve their specific business needs,” stated Marc Rouanne, chief network officer, DISH Wireless. “IBM’s orchestration solutions leverage AI, automation and machine learning to not only make these slices possible, but to ensure they adapt over time as customer use evolves.”
AI Governance Includes Ethics, Transparency
AI governance is defined as the practice of establishing accountability for the risks that come with using AI.
Scandals that have occurred in the world of AI have led to a level of distrust. When Cambridge Analytica engaged in the massive harvesting and use of personal data of millions of Facebook users without their consent, it caused many to doubt that AI could be kept under control and be helpful to people, suggests an account in AI Time Journal.
Many in the AI community are invested in making policy frameworks, practices, and outcomes a force for good. In an attempt to define AI policy, Tim Dutton, Founder and Editor-in-Chief of Politics+AI, describes it as “those public policies that maximize the benefits of AI, while minimizing its potential costs and risks.”
Accountable AI should equate with transparent AI, in the view of Elina Noor, Director of Political-Security Affairs at the Asia Society Policy Institute. She noted that “in the last few years, there has been increasing awareness of the need for greater transparency and accountability vis-a-vis AI algorithms. How is data being collected? What kinds of data sets are being compiled? How representative are data sets? Is bias accounted for or not? These are just a few questions that have to be clarified if AI is to be a positive force for the communities it serves.”
Discussion of AI governance is being furthered by a growing understanding of the surrounding terminology. Stephan Jou, CTO of the Interset line of business at Micro Focus, stated that “common vocabulary, understanding, and definitions on what it means for AI to be ethical and responsible, how to enforce, and how to implement, are all prerequisites to making progress in what began as a very fuzzy, ill-defined area.”
Generative AI Leading to New Applications
This is where AI becomes the creator of content. Two technologies are at the heart of generative AI: generative adversarial networks (GANs) and variational autoencoders (VAEs), according to a recent account in Forbes.
GANs are a way to train a generative model by framing the problem as a supervised learning problem with two sub-models: the generator model that is trained to generate new examples, and the discriminator model that tries to classify examples as either real or fake.
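The two-sub-model setup above can be sketched end to end even in one dimension. In the toy GAN below, the generator is a linear map and the discriminator is a logistic regression, trained with hand-derived gradients; the architecture, hyperparameters, and target distribution are all illustrative assumptions, not a production recipe.

```python
import numpy as np

# Minimal 1-D GAN sketch of the two sub-models described above.
# Generator g(z) = a*z + b tries to turn N(0, 1) noise into samples
# resembling the "real" data N(4, 1); the discriminator is a logistic
# regression d(x) = sigmoid(w*x + c) classifying samples as real or fake.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.0)   # one real sample
    z = rng.normal()              # noise input
    fake = a * z + b              # generator output

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * fake + c)
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w

samples = a * rng.normal(size=1000) + b
print(samples.mean())  # mean of 1,000 generated samples
```

The adversarial pattern is visible in the two opposing updates: the discriminator’s gradient step rewards separating real from fake, while the generator’s step moves its output toward whatever region the discriminator currently scores as real.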
In 2016, AI researcher Yann LeCun called GANs “the most interesting idea in the last ten years in machine learning.” GANs are used, for example, to generate 3D models required in video games, animated movies or cartoons.
VAEs consist of two neural networks that work in tandem to produce an output. The first network (the “encoder”) takes a piece of input data and compresses it into a lower-dimensional representation. The second network (the “decoder”) takes this compressed representation and, based on a probability distribution of the original data’s attributes and a randomness function, generates novel outputs based on the original input.
VAEs are deep learning techniques used to generate images, achieve state-of-the-art results in semi-supervised learning, and interpolate between sentences.
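The encoder–decoder pipeline with the randomness step can be sketched as a forward pass. Each “network” below is reduced to a single linear layer, and all sizes and weights are arbitrary assumptions made purely to show the data flow.

```python
import numpy as np

# Sketch of a VAE forward pass with the two networks described above.
# Each "network" is a single linear layer for illustration only.

rng = np.random.default_rng(0)
input_dim, latent_dim = 8, 2

# Encoder weights: compress the input into the parameters (mean and
# log-variance) of a lower-dimensional Gaussian.
W_mu = rng.normal(size=(latent_dim, input_dim)) * 0.1
W_logvar = rng.normal(size=(latent_dim, input_dim)) * 0.1

# Decoder weights: map a latent sample back to input space.
W_dec = rng.normal(size=(input_dim, latent_dim)) * 0.1

def encode(x):
    return W_mu @ x, W_logvar @ x

def sample_latent(mu, logvar):
    # Combine the distribution parameters with a randomness function, so the
    # decoder receives a slightly different latent point on every call.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    return W_dec @ z

x = rng.normal(size=input_dim)
mu, logvar = encode(x)
out1 = decode(sample_latent(mu, logvar))
out2 = decode(sample_latent(mu, logvar))
# The same input yields different outputs because of the sampling step.
```

The sampling step between encoder and decoder is what makes the outputs “novel outputs based on the original input” rather than plain reconstructions.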