Frank Lantz makes some good points on the “AI deep fakes are going to doom society” handwringing.
This quality of language, its infinite plasticity and our capacity to navigate its mercurial meanings, is one of the reasons I’m not all that worried about the impact of deep fakes. Our world is heavily mediated by language, it governs our social, practical, institutional, and personal interactions, and it is trivially easy for anyone to use it to generate illusions that are perfectly indistinguishable from reality. I can make up something from whole cloth and tell you that I saw it with my own eyes, or heard it from a friend, or read it in a paper, and there is absolutely no way to tell, from the text itself, that it is fake. I can put quotation marks around any statement and tell the world you said it, and this illusion will have perfect fidelity – there are no possible forensic tools that could ever, simply by looking at the text itself, show it to be fake. This is the world we already live in, and we do OK.
Another reason that deep fakes won’t, in my opinion, cause that much trouble is the fact that, for the most part, humans don’t reason by looking at evidence and drawing conclusions from it. Mostly, we start with conclusions, based on what feels right, and then use our reason to construct plausible explanations for our beliefs and actions. The idea that we would look at a really convincing, high-res picture of William Shatner shoplifting and then conclude that he was a thief is based on a naïve theory of how our minds work that doesn’t bear close scrutiny. We are far more likely to arrive at that conclusion if a good friend mentions it casually as a well-known fact.
Even then, imagine hearing that statement. Don’t imagine a hypothetical “poor helpless stupid internet person” hearing it; picture yourself, a famously smart person, hearing a friend say “William Shatner is a thief”, and think about what your reaction might be…
When you think about it, the whole system is a real mess, but it sort of works. It could definitely be improved, but overall it’s probably working better now than it was a few thousand years ago when complete nonsense was even more rampant. And even those dark times were probably better than 150 thousand years ago, when, limited to pointing and grunting, we couldn’t lie at all.