This blog post series questions our relationship with state-of-the-art machine learning systems for text generation. To do that, I feed an idea or a sentence to an algorithm and publish its output here.
The goal is to show that, despite the quality of the generated text, we should not forget that it is just a machine, albeit a more complex one than those we had before. Nothing here is “intelligent”, despite being labeled as “artificial intelligence”.
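For readers curious about the mechanics, the sketch below shows one way such a setup might look, assuming a GPT-2-style model accessed through the Hugging Face transformers library; the model and sampling settings shown are illustrative assumptions, not the exact ones used for this post.

```python
# Minimal sketch of "feed an idea or sentence to an algorithm and take its output".
# Assumes the Hugging Face "transformers" library and the public GPT-2 checkpoint;
# the actual model behind this series is not specified in the post.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "On deep fakes"  # the seed idea or sentence
outputs = generator(
    prompt,
    max_new_tokens=200,      # length of the continuation to generate
    do_sample=True,          # sample tokens instead of greedy decoding
    top_p=0.9,               # nucleus sampling: keep only the most likely tokens
    num_return_sequences=1,  # one continuation is enough for a post like this
)
print(outputs[0]["generated_text"])
```

Sampling (rather than greedy decoding) is what gives this kind of output its fluent but meandering character: each token is drawn from a probability distribution, with no model of truth behind it.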
On deep fakes
We live in a world where artificial intelligence can supposedly replicate almost anything – a world where history seems to be becoming just a screen on which we can watch our lives being made and destroyed. Of course, since we live in a world of ours, history isn’t really history. There is no real history, just our version of it. And this is precisely the kind of lie that makes us comfortable with the idea that the lives of nearly everyone in the world are the subject of new and ever more elaborate digital performances.
Just a few days ago it was Wile E. Coyote shooting through a camera obscura – that is, it was a recent video showing him plunging through the screen of a past video that you don’t know exists. On Tuesday it was Deep Fakes – computer-generated films in which experts have shown how they can create unassailable (read: fake) videos of people talking. This isn’t a problem of the digital realm, where computers and machines can theoretically create anything they want and enjoy doing so. This is a problem of the non-digital realm – where our interest in the past is often undermined by a cult of personality – and here, big ugly lies can explode more easily.
I am nowhere near up to all the tricks and techniques at the disposal of Deep Fakes, but according to Mark Zuckerberg’s “cryptophysics lab,” the only way to make something appear to be real is to start with something real – in other words, with something that already existed. This strikes me as amazingly backwards and foolish. No single, small set of tools – digital or otherwise – can do everything we’ve come to expect of machines, or of people. And truth be told, the whole of history is one big, weird mess, in which technology and human capability are inconsistent and at odds.
Not long ago I was walking with the anthropologist Naomi Oreskes in Jerusalem. Naomi points out that before there was Gutenberg there was a variety of small print collectors around Europe who had made several copies of the Bible over and over again. At every turn, these men were able to trick conventional, digital checkers into doing their bidding. “But they couldn’t prove anything,” Naomi told me. In her view, the fraudsters could not possibly have known they were deceiving their audiences. “You couldn’t prove anything they said was real.” This is especially true in the non-digital realm.
It is possible to believe that when you look at a photo, whether it’s a cell-phone photo of your partner’s dinner, a dramatic movie poster, a monument to a dead relative or a terrifying image of the Messiah, you are watching a real event. What’s more, you are also spending money – money which you don’t understand is being spent on you and which you don’t understand is being spent at all. If you’re like the people today buying music on Spotify, they’re investing their hard-earned cash into something that they don’t know is real, they’re investing it in a technology that doesn’t know anything about them, they’re investing it in a medium that is less real than words, they’re investing it in something that is just as promising but doesn’t exist.
On Tuesday, the Awl asked a number of experts about the emergence of Deep Fakes. Those experts found it alarming, said that certain kinds of humans could be very easily fooled, and that it was possible to use these new kinds of tricks to scam a number of people. But the evidence suggests that only a very small number of people will actually be tricked. And, for the record, it’s also possible to genuinely get fooled by those works of art that you thought were fake. Forces of addiction like this one do not exist.
There is no “deep faking.”
The text above was generated by a machine. It does not reflect my opinions.