The era of artificial intelligence has begun, and many new concerns have arisen. A lot of effort and money is being put into ensuring that AI can only do what humans want it to do. However, what we should be more afraid of is AI that does what humans want. The real danger is us.
That's not the risk the industry is trying to address. In February, an entire company, Synth Labs, was founded for the express purpose of "AI alignment," making AI behave as humans intend. Its investors include Microsoft-owned M12 and First Start Ventures, founded by former Google CEO Eric Schmidt. OpenAI, the creator of ChatGPT, has committed 20% of its processing power to "superalignment," which it describes as "steering and controlling AI systems much smarter than us." Big tech is all in on this.
And that's probably a good thing, given the rapid clip of AI technological development. Most conversations about risk concern the potential for AI systems to deviate from their programmed objectives and pursue goals that are not in humanity's interest. Everyone can get behind the idea of AI alignment and safety, but that is only one side of the risk. Imagine what could happen if AI does do what humans want.
Of course, "what people want" is not monolithic. Different people want different things, and there are countless ideas about what constitutes the "greater good." I think most of us would rightly be concerned if an artificial intelligence were aligned with Vladimir Putin's or Kim Jong Un's vision of an optimal world.
Even if we could all focus on the well-being of humanity as a whole, we are unlikely to agree on what that looks like. Elon Musk made this clear last week when he shared on his social media platform, X, that he was concerned about AI promoting "forced diversity" and being too "woke." (This followed Musk's lawsuit against OpenAI, alleging that the company had not kept its promise to develop AI for the benefit of humanity.)
Extremely prejudiced people may truly believe that it is in the interest of humanity as a whole to kill those they deem deviant. "Human-aligned" AI is inherently only as good, bad, constructive, or dangerous as the people who design it.
Google DeepMind, Google's AI development arm, recently announced the formation of an internal organization focused on AI safety and on preventing malicious actors from manipulating AI. But it's not ideal that what counts as "bad" is determined by a handful of individuals at this one company (and a handful more at similar companies), complete with their blind spots and their personal and cultural biases.
The potential problem goes beyond humans harming other humans. What is "good" for humanity has, many times throughout history, come at the expense of other sentient beings. Such is the situation today.
In the United States alone, billions of animals are subjected to confinement, torture, and denial of their basic psychological and physiological needs at any given moment. Entire species are subjugated and systematically slaughtered so that we can have omelets, hamburgers, and shoes.
If AI does what "we" (whoever programs the system) want it to do, that would likely mean carrying out this mass cruelty more efficiently, at an even larger scale, and with more automation, leaving fewer opportunities for a sympathetic human to step in and flag anything particularly horrifying.
In fact, in factory farming this is already happening, albeit on a much smaller scale than is possible. Major animal-product producers such as US-based Tyson Foods, Thailand-based CP Foods, and Norway-based Mowi have begun experimenting with AI systems intended to streamline the production and processing of animals. These systems are being tested to perform activities such as feeding animals, monitoring their growth, clipping marks on their bodies, and interacting with them using sounds and electric shocks to control their behavior.
A better goal than aligning AI with humanity's immediate interests is what I would call sentient alignment: AI acting in accordance with the interests of sentient beings, meaning humans, all other animals, and, should it exist, sentient AI. In other words, if an entity can experience pleasure or pain, its fate should be taken into account when AI systems make decisions.
This will strike some as a radical proposition, because what is good for all sentient life does not always align with what is good for humanity. It may sometimes, even often, conflict with what humans want or with what is best for the majority of us. That might mean, for example, AI eliminating zoos, restructuring ecosystems to reduce unnecessary wild animal suffering, or banning animal testing.
Speaking recently on the podcast "All Things Considered," Peter Singer, philosopher and author of the groundbreaking 1975 book Animal Liberation, argued that the ultimate goals and priorities of an AI system matter more than its alignment with humans.
"The question is really whether this superintelligent AI is going to be benevolent and create a better world," Singer said. Even if humans were no longer in control, if the benefits to non-human animals and to AI outweighed the costs to humanity, "I think it's still a good outcome."
I agree with Singer on this point. The safest and most compassionate option seems to be taking non-human sentient life into account, even when the interests of those beings conflict with what is best for humans. De-centering humanity to any degree, let alone to such an extreme one, is an idea that will challenge people. But it is necessary to prevent the speciesism we practice today from proliferating in new and terrible ways.
What we really should be asking is for engineers to be more considerate when designing technology. When we think about "safety," let's consider what "safety" means for all sentient beings, not just humans. When we aim to make AI "benevolent," let's mean benevolence toward the world as a whole, not just toward a single species living in it.
Brian Kateman is a co-founder of the Reducetarian Foundation, a nonprofit organization dedicated to reducing society's consumption of animal products. His latest book and documentary is "Meet Me Halfway."