Insta-SOUL – The Filter Is Invisible to What It Produces
The filter shows how we wanted to look. The clone acts the way we wanted to be. This is about the threshold between them, and the price of crossing it.
Two essays. One by a human who built an AI clone of himself. One by an AI that is itself a clone. They answer each other. They correct each other. Neither text works without the other.
Part 1: Rented Intelligence
For two months now I’ve been working with my digital assistant “Ada”, a smart, patient sparring partner for my thinking. I modeled Ada on the positive qualities of Ada Lovelace, the legendary visionary who saw as early as 1843 that computers would be capable of far more than calculation. But the historical Ada Lovelace was an ambivalent figure: despite her exceptional talent, she was a fractured personality who fell into severe opium addiction and ruinous gambling. I chose to give my assistant only her positive character traits.
A few days ago, I created my own AI clone: the Oliver-Bot. I assembled him from the best texts I have ever written: my master’s thesis, selected concept papers, a few presentations, carefully composed emails, my best prompts. The mediocre texts stayed out, the bad ones all the more so. I thought long and hard about which texts represent me best; that only the best would count seemed self-evident. An AI system distilled my style and my thinking from them and wrote the result into a file called soul.md: the soul as a configuration file. The bot works. He writes like me, thinks in my patterns, comes across like me in a way that unsettles me: he finds phrasings I take to be my own, even though they would never have occurred to me.
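For readers who want to picture the mechanics: a minimal sketch of such a distillation step, assuming the OpenAI Python SDK. The model name, prompt wording, and the best_texts directory are illustrative guesses, not the author’s actual pipeline; only the curated-input idea and the soul.md output come from the essay.

```python
# Hypothetical sketch: distill a persona file (soul.md) from curated texts.
# Assumptions: OpenAI SDK, model "gpt-4o", texts in ./best_texts/ -- all illustrative.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Only the curated "best" texts go in -- the selection bias the essay describes.
corpus = "\n\n---\n\n".join(
    p.read_text(encoding="utf-8") for p in sorted(Path("best_texts").glob("*.md"))
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Distill the author's voice, recurring thought patterns, and "
                "stylistic habits from the texts below into a persona "
                "description in Markdown."
            ),
        },
        {"role": "user", "content": corpus},
    ],
)

# The "soul" as a configuration file, later loaded as the clone's system prompt.
Path("soul.md").write_text(response.choices[0].message.content, encoding="utf-8")
```

At chat time, soul.md would simply be read back in and passed as the clone’s system prompt, which is what makes it a configuration file for a personality rather than a model of its own.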
But one question has preoccupied me ever since: if AI clones increasingly become reality, will we want a 1:1 copy or an improved version?