As it turns out, interviewing a professional artificial intelligence ethicist drew me into hours of meandering philosophical conversation. I was a little perplexed at first, because I have a column to write and interviews with sources are usually pretty easy.
When I thought about it, I realized that my predicament was humorous. What else can we expect from people whose careers depend on a talent for deep inquiry?
Morgan Sutherland grew up on Nantucket Island, Massachusetts. In his youth, he felt suffocated by the cramped confines of his small-town environment and sought something bigger and more meaningful, guided by a combination of philosophy, psychology, art, technology and history. When he mixed everything together, the result was obvious: the rich, unharvested field of artificial intelligence.
After a decade of networking, freelancing, consulting and interdisciplinary experience, Sutherland embarked on a six-month project with OpenAI to conduct research and engage directly with OpenAI leadership. His team rode a wave of interest in fine-tuning AI models to complete tasks and reconcile value conflicts in a way that fosters a net positive impact on humanity.
Another thing I quickly learned was that the title "AI Ethicist" doesn't fully encapsulate the profession I was so eagerly trying to figure out.
Instead, there are three general terms to know: AI Ethics, AI Safety and AI Alignment. Each subfield attracts its own people and, accordingly, its own ideological backgrounds. While AI ethics practitioners tend to come from humanities backgrounds and focus on diversity, accessibility and socio-economic and political issues, AI safety practitioners concentrate on how humans interact with the technology, concerning themselves with structural and psychosocial risks.
Sutherland mapped the anthropological landscape of these different groups with an observer's eye. "These are fields, but they are also scenes. There's the AI safety field and the AI safety scene, and everyone parties together, and everyone knows each other, and everyone reads the same things."
I am tempted to fantasize about the intellectual elite of the American Northwest meeting in high-rise gardens for weekly reading sessions, drinking at house parties and debating which tech startup is most likely to accidentally start the apocalypse.
After all, OpenAI’s mission statement is “to ensure that artificial general intelligence (AI systems that are generally smarter than humans) benefits all of humanity.”
The technologies we're experimenting with at this point are as close to science fiction as you can get.
The largest corporations in today's global economy are dabbling in something close to divine creation. There is no denying the need for calm, well-read, ethical personnel who can steer these companies in the right direction, but what counts as "right" is itself an incredibly subjective question.
While the position demands a great deal of personal conviction, the realities of the workplace still apply. When you work for a company, even as an ethicist, you are part of an organization that requires you to be mindful of brand positioning and company culture.
“When you join a company, you join its people and its incentives. It's a bit idealistic to think that if you want to publish something highly critical of the company, they'll let you do it, no questions asked,” Sutherland explained.
Navigating this paradigm means finding supportive colleagues to back your research and working well with company-appointed editors.
Another layer of complexity: while ethical challenges already cut across different schools of thought, different companies prioritize different approaches. It is a good idea to research the ethos of a potential employer before accepting a position; depending on where they work, two ethicists may do starkly contrasting work.
On a lighter note, our conversation also wandered through the nature of evil, the characteristics of good art, realism and idealism, money and power, corporate responsibility and the psychedelic counterculture, all set against a changing job market.
That last part excites me. Getting out of bed and moving my fingers to write is not something I take for granted. With the stakes so high, I expect a massive refocusing of what we consider valuable.
Sutherland acknowledges that the industry is moving quickly, but he cautioned against reading the intimidating pace as disempowering. “What I saw inside is that one brilliant person can change the course of history right now. So much can be done. It's up to us to make it happen.”
Victoria Frank is a third-year student writing about the inevitable future of AI with a focus on ethics and wellbeing. Her column, “Natural Intelligence,” runs every other Friday.