Funny AI: Review of AI in Cartoons and Comic Strips 

By John P. Desmond, AI Trends Editor  

Inevitably, AI has become a ripe subject for cartoons and comic strips. Cartoons are sometimes used to teach new concepts or present difficult material, as a way of easing into it. Cartoons designed to make people laugh often tap into the fear that AI will take over from humans, rendering humans useless. No one wants to be useless.

Here is a review of some AI cartoons.  

Scott Adams is the creator of Dilbert, the comic strip about an engineer toiling in a dysfunctional office. The strip captures mismanagement and workplace inefficiency, bringing them to life with a cast of characters who tend to be lazy and to cling to their middle-management positions.

In one example, Dilbert’s boss says, “I want you to create artificial intelligence that is as smart as me.” Dilbert says, “Okay, I should have that by lunchtime.” His boss says, “Because you’re a fast worker?” Dilbert says, “Sure.” 

Describing why something is funny is not funny, but here goes: it’s funny because Dilbert is making fun of his boss. AI doesn’t have much to do with it, except that the less-than-brilliant boss is easily fooled on the subject.

Timo Elliott describes himself on his website as a “passionate advocate of AI,” as well as a cartoonist and writer; for his day job, he is a VP and Global Innovation Evangelist for SAP in Paris, where he has worked for over 14 years, according to LinkedIn. Elliott offers the cartoons on his website for anyone to use, as long as you write to him and let him know how they will be used.

One of Elliott’s cartoons shows two bald businessmen in suits, going back and forth on some issue, with a younger guy with a laptop at the table, who says, “When you two have finished arguing your opinions, I actually have data!”  (See cartoon above.)  

This speaks to the notion that it’s the younger generations who have a leg up on what AI is and how to apply it to the business. The older folks would do well to encourage them and listen.  

Another cartoon from Elliott shows four robots sitting on a bench, while a job candidate talks to the receptionist, who says, “Actually, yes, we did let an algorithm choose the shortlist of candidates… Why do you ask?”

This gets at several issues: the problem of bias in data producing what is essentially a self-selected sample; the overall challenge of injecting AI into applications that evaluate candidates for hire; and the fear of robots taking over and replacing humans.

KDnuggets on Ethics and Self-Driving Cars  

Gregory Piatetsky-Shapiro, president and cofounder, KDnuggets

KDnuggets, the website focused on technical topics including AI, analytics, big data, and data science, also publishes cartoons. In a recent post, Gregory Piatetsky-Shapiro, its president and cofounder, published a cartoon on the challenge of programming ethics into self-driving cars.

In the classroom, one of the cars asks the professor, “So what should we do? Drive straight and run over a baby or swerve and hit two pedestrians?” Then the car adds a follow-up question, “What if one of the pedestrians is your mother-in-law?” 

Mother-in-law jokes are dated, old as the hills. We know that. But this one presents a real problem for programmers of self-driving car routines: how to handle an ethical dilemma in a real-world driving scenario. It’s not only an ethical dilemma, it’s a formidable technical challenge. The AI would need to detect that a baby is at risk, then factor in which life is more valuable: the baby’s, just starting out, or those of the older pedestrians who have been around for a while. That’s some challenging coding to have to deal with, Level 5 autonomy territory, which our AI Trends Insider columnist Lance Eliot has written about many times in AI Trends.

Here is another one from KDnuggets: The hiring lady is saying to the job candidate sitting in front of her, “I’m a little surprised. With such extensive experience in predictive analysis, you should’ve known we wouldn’t hire you.”   

One site, Artificial Intelligence Comic Strips, put out by The Cartoonist Group, gathers cartoons from over 18 newspaper comic strips.  

In an episode of Speed Bump by Dave Coverly, two robots are sitting on a couch across from a couple, and everyone has a drink. One robot says, “Do you have to use finger quotes every time you mention my artificial ‘intelligence?’” That gets at the fear that robots have become so human-like, we have to worry about offending them.   

This AI Generates the Cartoon AI 

Now, inevitably as well, we have a category of cartoons generated with the help of AI on an AI subject.  

Ilan Manouach, Cofounder and CEO, Applied Memetic

Comics artist Ilan Manouach and AI engineer Ioannis Siglidis are behind the Neural Yorker published in The New Yorker magazine. Manouach is an artist, researcher, strategy consultant and founder of Applied Memetic, based in Brussels. The firm is described as a synthetic media company using AI to understand and expand human creativity.   

The two developed an AI engine that posts computer-generated cartoons on Twitter, according to an account in Hyperallergic. The image-and-caption combinations are produced by a generative adversarial network (GAN), a deep-learning-based model. The network is trained using a database of punchlines and images of cartoons found online. It then “learns” to create new gags in the New Yorker’s iconic style, with sometimes hilarious and unsettling results.
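For readers curious what “generative adversarial network” means in practice, here is a toy sketch of the idea in plain NumPy. This is not the Neural Yorker’s actual model (which works on cartoon images and captions); it is a minimal, assumed illustration on one-dimensional data, where a generator learns to produce samples a discriminator cannot tell apart from the “real” distribution:

```python
# Toy GAN sketch: a generator (affine map of noise) learns to mimic
# "real" data drawn from N(4, 0.5), by fooling a logistic discriminator.
# Illustrative only -- not the Neural Yorker's architecture.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # The "real" data distribution the generator must imitate.
    return rng.normal(4.0, 0.5, size=n)

g_w, g_b = 1.0, 0.0   # generator: fake = g_w * z + g_b
d_w, d_b = 0.1, 0.0   # discriminator: D(x) = sigmoid(d_w * x + d_b)
lr = 0.05

for step in range(2000):
    z = rng.normal(size=32)
    fake = g_w * z + g_b
    real = real_batch(32)

    # Discriminator update: push D(real) toward 1, D(fake) toward 0
    # (gradient ascent on log D(real) + log(1 - D(fake))).
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    d_w += lr * np.mean((1 - p_real) * real - p_fake * fake)
    d_b += lr * np.mean((1 - p_real) - p_fake)

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator
    # (gradient ascent on log D(fake), backpropagated through fake).
    p_fake = sigmoid(d_w * fake + d_b)
    grad = (1 - p_fake) * d_w
    g_w += lr * np.mean(grad * z)
    g_b += lr * np.mean(grad)

# After training, generated samples should cluster near the real mean of 4.
gen_mean = float(np.mean(g_w * rng.normal(size=1000) + g_b))
```

The Neural Yorker applies this same adversarial push-and-pull at a far larger scale, to images and punchlines rather than numbers on a line.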

Here is one example: “I think we ought to talk about something very substantive,” says what looks like sperm in the process of fertilizing an egg.  

Another example: “Please Harry, no I cannot use your name,” says a woman to what looks like a desktop computer. One observation: because the cartoons are drawn by the AI, they are not always easy to understand, and the message does not always come through.

Manouach and Siglidis stated to Hyperallergic, “The cartoon format thrives on quirkiness, absurdity, arbitrariness, and cheap artifice in order to get their simple message through.”  

“Cartooning is paradoxically a 21st-century art form catering to a readership with limited attention for a quick visual gratification fix,” they stated. “The Neural Yorker explores the limits of an important feature in the history and the modes of address of cartoon making: the non sequitur.”  

A non sequitur is a conclusion that does not follow, or a statement unrelated to anything previously said. The AI has a natural advantage here: it essentially does not know what it is doing. The Hyperallergic account described the strip as “highlighting the subjectivity of humor.”

Manouach is currently co-editing a glossary on artificial intelligence and working on Fastwalkers, a manga comic book written with emergent AI. Siglidis is starting a PhD program on Deep Learning and Computer Vision at the École des Ponts ParisTech. 

Read the source articles and information at Dilbert, on the website of Timo Elliott, at KDnuggets, on the AI Trends website of Lance Eliot and in Hyperallergic.