With the 2024 US election just seven months away, US officials are warning that the age of AI poses a far greater threat than social media of the past two decades. But few people have a more personal connection to the topic than Hillary Clinton.
At an event on AI and global elections hosted by the Aspen Institute and Columbia University last week, the former secretary of state and presidential candidate said that AI poses a “totally different level of threat” than the foreign-based influence operations on Facebook and Twitter in past elections. Those earlier efforts, she said, look “primitive” compared to AI-generated deepfakes and other AI-generated content.
“They had all kinds of videos of people who looked like me, but were not me,” Clinton told the audience. “I can tell you, it’s not funny to have defamatory videos about you out there. But when they’re made in such a way that you can’t tell them apart, you don’t even know whether what you’re seeing is true or not.”
The event brought together leaders from the U.S., Europe, and state governments, as well as top experts in the worlds of AI and media. Speakers highlighted a range of issues and also suggested possible solutions. Michigan Secretary of State Jocelyn Benson said tech companies and the government should build new guardrails and educate people on how to avoid being fooled by misinformation. (Her state recently passed new laws addressing AI and election-related misinformation, which prohibit deceptive uses and require disclosure of AI-generated content.)
“There are opportunities for us there, but also real challenges,” Benson said. “…we, as critical consumers of information, need to know what to do, where to go, and how to verify when we receive a text message — [to find] a voice you can trust.”
Businesses and governments are better prepared for online misinformation than they were in 2016, said Anna Makanju, a former Obama administration national security expert and vice president of international affairs at OpenAI.
“As AI companies, we’re not tackling the same kinds of problems,” Makanju said. “Our responsibility is around generating AI content, not distributing it. But we need to work across that chain.”
Some speakers, including Clinton, former Google CEO Eric Schmidt and Rappler CEO Maria Ressa, also called on Congress to reform Section 230 of the Communications Decency Act. Ressa, a journalist who won the Nobel Peace Prize in 2021, also pointed out that it is hard to understand what it is like to be a victim of online harassment and misinformation until you have been attacked yourself.
“The biggest problem we have is that there is impunity,” Ressa said. “Stop the impunity. Tech companies will say they will self-regulate. [But a good example is] the press — we weren’t just self-regulating, we had legal boundaries [such that] if we lie, you sue us. At the moment there is absolute impunity, and America has not passed anything. I joke that the EU won the tortoise race to introduce legislation to help us. That’s too slow for the lightning pace of technology. We will pay the price.”
Commenting on Section 230 during the same conversation, Clinton said, “It’s shameful that we’re still sitting around and talking about it.”
“Technology companies — and I’m obviously talking about social media most of the time — need a different system by which their platforms operate,” Clinton said. “I think they would still make huge profits if they changed their algorithms to prevent the kind of harm caused by sending people to the lowest common denominator every time they log on. We have to stop rewarding that kind of content.”
Below is a snapshot of what top speakers said during the half-day event.
- Former Secretary of Homeland Security Michael Chertoff: “In this day and age, we need to see the internet and information as a realm of conflict. How do we teach people to tell deepfakes from the real thing? We want people to avoid being fooled by deepfakes. But I’m worried about the opposite: in a world where people are informed about deepfakes, do they start to think everything is a deepfake? That just gives dictators and corrupt government leaders permission to do whatever they want.”
- Eric Schmidt, former CEO of Google, said: “Information, and the information space we live in, cannot be ignored. I’ve given speeches before where I say: do you know how to solve this problem? Turn off your phone, step away from the internet, have dinner with your family and live a normal life. Unfortunately, my industry made it impossible to escape all of this. As a normal human being, you are exposed to all of this horrible filth and more. Ultimately, this will be fixed either through industry cooperation or regulation. A good example here is TikTok, where certain content spreads more than others. We can argue about that. TikTok isn’t really social media; TikTok is just television. And when you and I were young, there was a big [debate] about how to regulate television. There is something called the equal time rule, which says that if you present one side, the other side must be presented approximately equally — a rough balance. That is how society will solve these information problems. Failing to do so will make the situation even worse.”
- David Agranovich, Director of Global Threat Disruption at Meta, said: “These operations are increasingly happening cross-platform and across the internet, and responsibility is becoming more distributed. Platform companies have a responsibility to share information with groups that can take meaningful action, across the various affected platforms. The second big trend is that these operations are becoming increasingly commercialized. That democratizes the tools and hides who is paying for them, which makes it very difficult to hold threat actors accountable.”
- Federal Election Commissioner Dara Lindenbaum: “Despite its name, the Federal Election Commission actually only regulates campaign finance law in federal elections: money coming in, money going out, and transparency. We are currently going through a petition process to determine whether our regulations can be amended and whether there is a role for the FEC in this area. Our statutory language is very clear, but very limited. Even if we could regulate here, it really only covers bad behavior between candidates… Congress could expand our limited jurisdiction. If you had asked me years ago whether Congress could regulate the campaign field and reach truly bipartisan agreement, I would have laughed. But it’s pretty incredible to see the widespread fear of what could happen here. We recently held oversight hearings where members on both sides of the aisle expressed serious concerns, and while I don’t expect anything to happen ahead of November, we do see changes coming.”
Prompts and Products: AI News and Announcements
- Amazon announced it will invest an additional $2.75 billion in AI startup Anthropic, bringing the e-commerce giant's total investment in the OpenAI competitor to $4 billion. The investment comes two months after the Federal Trade Commission launched an inquiry into Anthropic's and OpenAI's relationships with the large technology companies that fund them.
- IBM debuted a new AI-focused campaign called “Trust What You Create,” highlighting both the potential risks of AI and how to mitigate them. The company also announced updates to help marketers use generative AI in the content supply chain.
- The World Federation of Advertisers announced a new “AI Community” to help advertisers navigate generative AI. Steering committee members include executives from a variety of brands including IKEA, Diageo, Kraft Heinz, The LEGO Group, Mars and Teva Pharmaceuticals.
- Brandtech Group announced it has raised $115 million in a Series C investment round to strengthen the marketing holding company's generative AI efforts. The group acquired AI content generator Pencil in 2023.
- In Google's 2023 Ad Safety Report, the company highlighted the impact of generative AI, including details on new risks, Google's latest policies, and how generative AI tools can be used in brand safety efforts. The company also included information about the types of harmful content it took action against in 2023.
- The BBC announced it will stop using AI in its Doctor Who marketing, reversing course from a few weeks ago, following complaints about its use in email and mobile notifications.
Quote from Humans: Q&A with Fiverr CMO Matti Yahav
As freelancers and their clients grow more interested in AI, freelance marketplaces are finding ways to ride the wave through new AI tools, categories, and advertising efforts.
The person responsible for marketing Fiverr's platform is Matti Yahav, who joined the company as CMO in November after years as CMO at Sodastream. In a recent interview, Yahav spoke to Digiday about the Israeli company's approach to marketing and how the platform will navigate the growth of AI. Below is a shortened and edited version of the conversation.
How does the approach to marketing a platform like Fiverr differ from the approach to marketing a physical product like Sodastream?
I think there are a lot of similarities, like how you build a brand and how you create demand. On the other hand…I was spending a lot of time thinking about what the sales floor would look like, what the packaging would look like, etc. In the case of consumer goods, these are more like specific marketing domains, which are less relevant when talking about marketplaces and software. There are a lot of similarities, but obviously there's also a learning curve, which I'm really looking forward to.
Fiverr added a number of new categories last year to accommodate the supply and demand of various AI tools. How do you market these to freelancers and potential customers? What trends are you seeing?
Freelancers are building AI applications that let companies integrate AI into activities such as chatbots. Another example is expert programmers offering to clean up AI-generated code. Artists are working as prompt engineers creating AI-generated art. We also have a number of web development freelancers who offer services to build custom AI blog-writing tools using ChatGPT and GPT-3, as well as consultations on what AI can do for small and medium-sized businesses. Probably the last but super interesting [example] is fact-checking. As AI creates large amounts of content, we've seen many people on our platform searching for services like fact-checking. You never know what's a hallucination, what's wrong, and what's right.
Do you run paid media on generative AI chat or search platforms like Copilot or Google's Search Generative Experience?
Are we experimenting with them? Sure. Have some of them been implemented? It's a process. Like a lot of marketers, we're trying not to use AI just to say we're using AI. We're trying to find the right use cases for us and figure out how to get the most out of them.