Last week, while moderating a talk on artificial intelligence, Latanya Sweeney posed a thought experiment: imagine the internet three or five years from now. AI companies continue to scrape data from the internet to train large language models. But unlike today's internet, whose content is largely human-generated, most of the future internet's content will be generated by…large language models.
The Harvard Kennedy School professor suggested that this scenario is not far-fetched, given the explosive growth in generative AI over the past two years.
Sweeney's panel discussion was part of a day-long symposium on AI hosted by the FAS last week, which considered questions such as: How are generative AI technologies like ChatGPT disrupting what it means to own your work? How can we leverage AI thoughtfully while maintaining academic and research integrity? And just how good will these large-language-model-based programs get? (Very, very good.)
“Here at the FAS, we are uniquely positioned to explore the questions and challenges that arise from this new technology,” said Hopi Hoekstra, Edgerley Family Dean of the Faculty of Arts and Sciences, in her opening remarks. “Our community is full of great thinkers, curious researchers, and knowledgeable academics who can leverage their diverse expertise to tackle the big questions in AI, from ethics to social impact.”
In an all-student panel discussion, philosophy and mathematics concentrator Chinmay Deshpande ’24 likened the current moment to the advent of the internet, explaining how that transformative technology forced academic institutions to reconsider how they assess knowledge. “Regardless of what AI looks like in the future, I think it's clear that it's starting to have an impact qualitatively similar to the impact of the internet,” Deshpande said. “When we think about pedagogy, we need to think about AI in some similar ways.”
Naomi Bashkansky ’25, a computer science concentrator and master’s student who studies AI safety issues with other students, called on Harvard to provide thought leadership on the impact of an AI-saturated world by offering courses that integrate the basics of large language models into subjects such as biology and writing.
Kevin Wei, a student at Harvard Law School, agreed.
“We haven't done enough to address how the rise of generative AI systems will change the world, especially how the economy and labor market will change,” Wei said. “Whatever Harvard can do to play a leading role in that, we look forward to a greater role for the university…in consultation with government, academia, and civil society.”
The day began with a panel discussion on scholarship co-hosted by the Mahindra Humanities Center and the Edmond J. Safra Center for Ethics. Panelists considered the ethics of authorship in an era of instant access to information and blurring lines between citation and copyright, and how those considerations differ across disciplines.
David Joselit, Arthur Kingsley Porter Professor of Art, Film, and Visual Studies, said the challenges posed by AI have precedent in the history of art: the concept of the “author” was already destabilized in the modern era, as artists came to treat the idea behind a work of art, rather than the physical object, as what matters. “AI seems to me to be a mechanization of that kind of distribution of authorship,” Joselit said. He raised the idea that AI should be understood “not just as a tool, but as its own genre.”
Other sessions included a review of Harvard Library research on law, information policy, and AI that shed light on how students are using AI in their academic work. Administrators across the FAS also shared examples of how they are experimenting with AI tools to improve productivity, Bok Center panelists described how AI has been leveraged in teaching this year, and Harvard University Information Technology offered insight into the tools it is building to support instructors.
Across the first floor of the Northwest Building, where the symposium was held, a poster fair showcased final projects from Sweeney's course “Technology Science to Save the World,” in which students explore how scientific experiments and technology can be used to solve real-world problems. Posters included “Viral or Volatile? TikTok and Democracy” and “Campaign Advertising in the Age of AI: Can Voters Tell the Difference?”
Students in the General Education course “Rise of the Machines?” concluded the day by sharing final projects exploring the present and future of generative AI.