Hello, my name is Maurice. I am an alum of Webster University in Ohio. I live in Missouri and am very interested in “Jewish in St. Louis” community activities. These all are lies about me, and I can’t correct them. This misinformation, likely the result of human input errors, is embedded in computer data and is what some algorithms “think” is true when they send me emails. In this case, the consequences are benign; I actually enjoy learning about the Jewish community in St. Louis. What’s scary is how computer algorithms, AI, and all our technology can take data that is perfectly true and turn it into “facts” that are frighteningly false.
Algorithms might not lie, but they cannot act with integrity or parse information the way humans can. Two of my past experiences could have led to bad consequences. The first was in the late 1990s, when the internet was very young. A client of mine had invented a little plastic strip that, when placed in an autoclave, changed color once medical instruments and the like were thoroughly sterilized. We were having a lot of success marketing the product to dental offices and were brainstorming other industries in which it could be used.
Someone suggested researching tattoo and piercing parlors, which had been given a lot of attention because of issues with non-sterile needles and implements. At home I went online and began searching, coming across lots of disturbing information, odd tattoo art, details about intimate piercings (I don’t ever want to know what a “Prince Albert” is), and links to porn sites. Luckily the algorithms weren’t as sophisticated then, and my identity wasn’t associated with these sites.
By 2012, the online world was a lot more sophisticated. My lovely 18-year-old Siamese mix cat was diagnosed with diabetes, and because there is no synthetic feline insulin, I had to give her shots of human insulin. I bought the insulin and syringes at the local Costco. Soon after, when I went online, I began seeing ads for all sorts of diabetes products, definitely for people, not cats. A period of anxiety followed when I expected the world to think I was diabetic, with possible impact on insurance and the like. In this case, the inputs the algorithms received were perfectly accurate, but the conclusion was not, because algorithms can't understand context.
My favorite thought experiment would be to test an AI against a person with the question, “Do these pants make me look fat?” Possible answers:
“No, they are flattering.” Presumably the AI could provide this answer only if it were true. For a human, this answer could be true (they really like the way the pants look), or it could be a total lie, with the intent to spare the inquirer’s feelings.
“Yes, they do make you look fat.” Again, the AI would answer this way if it were true. The human could as well, but the motivation would be more complex: to deliberately hurt the inquirer as a “bad” act, to be “honest,” or possibly to spare the inquirer embarrassment in public, which could be interpreted as a kind act.
“You look fabulous in anything.” For a human, this could be technically true, but more likely a classic white lie. I can’t see how an AI could come up with this answer.
As we are finding out in our current political environment, context and integrity mean a lot more than whatever “facts” are thrown around. It’s going to be a puzzling world for some time to come.
I have recently retired from a marketing and technical writing and editing career and am thoroughly enjoying writing for myself and others.