Mimetic AI—Mimesis and Artificial Intelligence

Artificial intelligence and two of its sub-domains, machine learning and deep learning, are often developed with the aid of mimetic algorithms.

Financial engineers use mimetic algorithms to drive momentum trading in stocks and other financial assets; sex robots are programmed to mimic the facial expressions and flirtatious voices of their suitors; and even moral theorists and behavioral economists have been using mimesis to try to determine what a self-driving car should do if, for instance, it has to choose between avoiding one person and hitting another, or in other classic cases of moral casuistry. In many of these algorithms, researchers simply gather data about what the majority of people in different cultures around the world say they would do, and use that data to program a form of mimetic morality into the machines.
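To make this concrete, here is a minimal Python sketch of such majority-based mimetic morality. All of the scenario and response labels are invented for illustration; this is the bare logic of the approach, not any research group's actual implementation. The machine's rule is simply whatever most surveyed respondents said they would do.

```python
from collections import Counter

# Hypothetical survey responses: what each respondent says the car should do
# when forced to choose between two harms in a given dilemma scenario.
survey_responses = {
    "swerve_into_barrier": ["spare_pedestrian", "spare_passenger", "spare_pedestrian",
                            "spare_pedestrian", "spare_passenger"],
    "child_vs_adult":      ["spare_child", "spare_child", "spare_adult", "spare_child"],
}

def mimetic_policy(scenario: str) -> str:
    """Return the action most respondents chose for this scenario.

    This is mimetic morality in the crude sense described above: the machine
    imitates the majority answer, with no check on whether that answer is
    ethically correct.
    """
    votes = Counter(survey_responses[scenario])
    action, _count = votes.most_common(1)[0]
    return action

if __name__ == "__main__":
    for scenario in survey_responses:
        print(scenario, "->", mimetic_policy(scenario))
```

The point of the sketch is that nothing in the rule tests whether the majority answer is ethically correct, which is exactly the worry raised in what follows.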

In the moral domain, mimesis is often set over and against “anchored values”: values that are intrinsic, or anchored to something that is not itself subject to mimetic change. In “Mimetic vs Anchored Value Alignment in Artificial Intelligence,” Tae Wan Kim and Thomas Donaldson argue for the superiority of anchored values over mimetic values when it comes to things like self-driving cars.

The abstract of that paper reads:

“Value alignment” (VA) is considered as one of the top priorities in AI research. Much of the existing research focuses on the “A” part and not the “V” part of “value alignment.” This paper corrects that neglect by emphasizing the “value” side of VA and analyzes VA from the vantage point of requirements in value theory, in particular, of avoiding the “naturalistic fallacy”–a major epistemic caveat. The paper begins by isolating two distinct forms of VA: “mimetic” and “anchored.” Then it discusses which VA approach better avoids the naturalistic fallacy. The discussion reveals stumbling blocks for VA approaches that neglect implications of the naturalistic fallacy. Such problems are more serious in mimetic VA since the mimetic process imitates human behavior that may or may not rise to the level of correct ethical behavior. Anchored VA, including hybrid VA, in contrast, holds more promise for future VA since it anchors alignment by normative concepts of intrinsic value.

And the conclusion of their study:

The preceding discussion reveals stumbling blocks for VA approaches that neglect implications of the naturalistic fallacy. Such problems are more serious in mimetic VA since the mimetic process imitates human behavior that may or may not rise to the level of correct ethical behavior. Anchored VA, including hybrid VA, in contrast, holds more promise for future VA since it anchors alignment by normative concepts of intrinsic value.
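Read alongside the quoted passages, the difference between the two approaches can be sketched in a few lines of Python. Everything below is invented for illustration (the action labels, the survey data, and the violates_anchor rule); it is not the authors' implementation. A purely mimetic policy imitates whatever most people chose, while an anchored (or hybrid) policy first rules out actions that violate a normative constraint and only then imitates within what remains.

```python
from collections import Counter

# Invented survey data: what respondents said they would do in one dilemma.
HUMAN_CHOICES = ["swerve_toward_pedestrian", "brake_hard",
                 "swerve_toward_pedestrian", "swerve_toward_barrier"]

def violates_anchor(action: str) -> bool:
    """An anchored, normative constraint: actions that deliberately target a
    person are ruled out no matter how many people say they would choose them."""
    return "pedestrian" in action

def mimetic_va() -> str:
    """Imitate the majority choice, with no normative filter."""
    return Counter(HUMAN_CHOICES).most_common(1)[0][0]

def anchored_va() -> str:
    """Filter choices by the anchor first, then imitate within what remains
    (a crude version of the 'hybrid' VA the authors mention)."""
    permitted = [a for a in HUMAN_CHOICES if not violates_anchor(a)]
    return Counter(permitted).most_common(1)[0][0]

if __name__ == "__main__":
    print("mimetic VA chooses:", mimetic_va())    # may reproduce an unethical majority
    print("anchored VA chooses:", anchored_va())  # constrained by the normative anchor
```

Even in this toy form, the naturalistic-fallacy worry is visible: the mimetic policy will happily reproduce an unethical majority, whereas the anchored policy cannot, because the constraint does not move with the crowd.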