Let's talk about science fiction.
Neal Stephenson's 1992 novel Snow Crash is the book that launched a thousand startups. The first novel to use the Hindu term "avatar" to describe a virtual representation of a person, it coined the word "metaverse" three decades before Mark Zuckerberg renamed Facebook in pursuit of the concept and made the book required reading for new executives, as the entire company works to turn Stephenson's fictional world into reality.
The plot revolves around images that, when viewed in the metaverse, take over the viewer's brain, maiming or killing them. In the fiction, the images "crash" the brain by feeding it input it cannot safely process.
It's an idea that comes up again and again in science fiction. Perhaps the first clear example appeared four years earlier, in British science fiction writer David Langford's short story BLIT, which imagines a terrorist attack using a "basilisk": an image that contains "an implicit program which the human equipment cannot safely run". In a follow-up published in Nature in 1999, Langford drew a parallel between his basilisks and Monty Python's Flying Circus's famous sketch about the funniest joke in the world, which kills everyone who hears it with laughter.
The collaborative fiction project SCP has given the idea a name: "cognitohazard" – an idea that is harmful in and of itself.
And it's a question that should be taken increasingly seriously: are cognitohazards real?
What you know can hurt you
I started thinking about that question this week while reporting on efforts to automatically identify deepfakes in a year of elections around the world. When I first heard the term in 2017, in the context of face-swapping porn, AI-generated imagery could still be spotted on inspection. The task has grown steadily harder, though, and is now approaching the point where even experts in the field struggle. So it's a race against time to build systems that can automatically find and label such material before it crosses that threshold.
But what if labeling isn't enough? From my story:
Henry Parker, director of government affairs at the fact-checking group Logically, says that watermarking doesn't always have the desired effect. The company uses a mix of manual and automated methods to vet content, Parker said, but labelling has its limits: "Even if you tell someone they're watching a deepfake before they watch it, the social psychology of watching that video is so strong that afterwards they will still refer to it as though it were fact. So the only thing you can do is reduce the amount of time that content is in circulation."
Can such videos be called cognitohazards? A video so convincing and realistic that you can't help treating it as real, even when you're told otherwise, seems to meet the definition.
Of course, stretch the definition that far and it covers a lot of fiction, too: horror stories that lodge in your head and keep you up at night, or scenes of graphic violence that turn your stomach, would all count as cognitohazards.
Dominoes fall
Perhaps the closest real-world parallel to the fiction, though, is material that hijacks our attention rather than our emotions. Emotions, after all, are rarely under our control at the best of times; feeling something you don't want to feel is almost the definition of a negative emotion.
Attention ought to be different: something we can consciously control. We talk about "distractions", but more severe failures of attention attract increasingly medicalised terms – "compulsions", "obsessions", "addictions".
The idea that technology attacks our attention is not new; a whole concept of the "attention economy" underpins the barrage. In a world of advertising-funded media, companies compete not directly for your money but for your time, of which there are only 24 hours in a day, so there is a huge commercial incentive to seize and hold attention. Some of the tools of the trade developed to that end do feel as if they tap into something fundamental: the bright red dot on a new notification, the tactile pull of a refreshing feed and the relentless push towards gamification have all been discussed at length.
And I think some have crossed the line into being genuine cognitohazards. Perhaps they're only dangerous to those whose attention is susceptible to hijacking, but the compulsion feels real.
One is a class of game. "Clicker" or "idle" games, such as the popular Universal Paperclips, distil a game's reward mechanisms down to their simplest form. So called because they almost literally play themselves, idle games offer a dizzying array of timers, countdowns and upgrades, with a constant drip of breakthroughs, improvements and efficiencies always just seconds away. Like many others, I have lost whole days of productivity to them.
The other is what I've come to think of as "domino videos": the non-interactive equivalent of an idle game. A video in which a process unfolds in an orderly but not entirely predictable way draws you in, creating an irresistible urge to keep watching until the end. Sometimes it is literally dominoes falling; sometimes it's someone deep-cleaning a carpet or shaving the pilling off a sweater. Sometimes the process never completes at all: Pong Wars is a self-playing breakout-style "game" in which two balls each threaten to invade the other's territory. It will never end, but you may well find yourself watching it for longer than it deserves.
Perhaps this is as bad as it gets. Maybe there's something inherently self-limiting about true attention traps: the urge to watch progress unfold is always counterbalanced by shame and disgust at the time wasted.
But what if it isn't? What happens when generative AI is unleashed on social media to chase attention at a truly industrial scale? Does the advice parents give their young children shift from being careful about who you talk to on the internet to being careful about what you look at?
It's all science fiction until it becomes reality.
If you want to read the full newsletter, subscribe to receive TechScape in your inbox every Tuesday.
Join Alex Hern for a Guardian Live online event on AI, deepfakes and elections on Wednesday 24 April at 8pm BST. Book tickets here.