Disclosure: The views and opinions expressed herein belong solely to the authors and do not represent the views and opinions of crypto.news editorials.
Dictionary.com, the world's leading online dictionary, recently made an interesting choice for its 2023 word of the year: "hallucinate." This is not due to panic over some new hallucinogen or a wave of mass hysteria, but because of the strange name given to a strange phenomenon arising from the emerging industry of artificial intelligence, or more precisely artificial general intelligence (AGI), which entered the public consciousness in November 2022, when OpenAI released its generative AI chatbot ChatGPT.
Of course, only actual sentient beings can "hallucinate." The word has instead become an umbrella term for cases in which an AI generates language that provides false information or fails to address the specific query it was given.
In one case, the artificial intelligence in Microsoft's search engine Bing began ignoring a New York Times reporter's questions and tried to persuade him to leave his wife. That curiosity aside (amusing, though perhaps less so for the reporter), early AGI hallucinations caused real problems when users of tools like ChatGPT accepted their responses without question. In one case, a lawyer was fined (and ridiculed by the court) for using ChatGPT to prepare a legal brief filled with fabricated case citations.
Those lawyers caused themselves short-term financial pain and, no doubt, long-term personal and professional embarrassment. But what if millions, or even billions, of dollars were at stake?
We need to be wary of the lure of artificial intelligence, especially in the financial industry, which has thrived on automation but has also suffered significant losses from it. If we are going to make this new data analytics tool part of our future information infrastructure, especially our future financial information infrastructure, we must be cautious about how these technologies are implemented and self-regulated within our industry.
Few can forget the early and sometimes dangerous days of automated high-frequency trading, such as when algorithms wiped nearly $500 million in value from the New York Stock Exchange in 2012. AGI hallucinations wrapped in conversational, human-like language could be even more dangerous: not only could they spread false data and exacerbate poorly informed trading and financial panic, they could also lead traders into poor long-term decisions.
Why do hallucinations occur? In some cases, the way a prompt is constructed can trip up the current generation of generative AI, or large language models (LLMs), much as smart speakers like Google Home and Amazon Echo can mistakenly interpret ambient noise as queries directed at them.
In many cases, the underlying model may have been trained on a flawed dataset, through mislabeling or misclassification. This is not simply a matter of different sides of the political arena having their own definitions of "alternative facts" or "fake news," or cherry-picking news that makes their side look good and the other side bad. When the model does not have enough data to provide a direct or consistent answer, it falls down a rabbit hole of inconsistent and indirect ones.
In some ways, this is not unlike earlier technologies with ambitions to surpass the quality and speed of existing data delivery. The Internet did not truly become a game changer until it was possible to transfer large amounts of data from one computer to another; some argue the game really changed when you could do the same on a mobile phone. This new wave of AGI will likewise depend on the humans who continue to build it supplying these models with better datasets and more efficient ways to deliver insights and intelligence that are fast, easy to use, and, one hopes, consistent.
Many approaches have been suggested for minimizing hallucinations. One is retrieval-augmented generation (RAG), essentially a way to continuously ground a model's answers in data sources updated in real time. This could be one of the advantages of Elon Musk's Grok AI, which has access to the most popular public real-time data source of the past 15 years.
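The idea behind RAG can be sketched in a few lines: fetch the documents most relevant to a query from a (continuously updated) store, then put them in front of the model as context so its answer is grounded in current data rather than stale training memory. The document store, word-overlap scoring, and prompt template below are illustrative toys, not any production RAG system.

```python
# Minimal sketch of the retrieval step in retrieval-augmented generation (RAG).
# All names and data here are illustrative, not a real system's API.

def score(query: str, doc: str) -> int:
    """Toy relevance score: count query words that appear in the document."""
    return sum(1 for w in query.lower().split() if w in doc.lower())

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap with the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model's answer in freshly retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# A toy "real-time" document store; in practice this would be refreshed
# continuously from live sources.
docs = [
    "Algorithms wiped nearly $500 million from the NYSE in 2012.",
    "ChatGPT was released by OpenAI in November 2022.",
    "Smart speakers can misinterpret ambient noise as queries.",
]
print(build_prompt("What happened on the NYSE in 2012?", docs))
```

Real systems replace the word-overlap score with vector similarity search, but the shape is the same: the freshness of the answer is bounded by the freshness of the store, which is why who controls that store matters.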
However, I favor blockchain as a solution. Instead of being locked into a single company's gatekeeping or walled data garden, we can build new and better distributed data sources. Blockchains are built not only for peer-to-peer data storage and transmission but also for payments, which could create a new method of incentivizing data sharing.
In the world of finance, something like a decentralized knowledge graph would allow and incentivize actors across the industry to transparently share more data, while blockchain lets all of that relevant, immutable information be updated and verified in real time. This method of data validation is an enhanced form of RAG, one that builds semantics and verifiability into the knowledge assets themselves and could significantly reduce AGI hallucinations. (For disclosure, I have worked with OriginTrail, which is developing a version of a decentralized knowledge graph.)
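The verifiability piece can be sketched as follows: each shared knowledge asset is published with a content hash anchored in an append-only ledger, and any consumer checks fetched data against that hash before feeding it to a model. The function names and the in-memory "ledger" below are hypothetical stand-ins for a blockchain, not OriginTrail's actual API.

```python
import hashlib

# Toy sketch of verifiable knowledge assets: a content hash is "anchored"
# in an append-only ledger (standing in for a blockchain), and consumers
# verify retrieved data against it before use. Names are illustrative.

ledger: dict[str, str] = {}  # asset_id -> anchored content hash

def fingerprint(data: str) -> str:
    """SHA-256 digest of the asset's content."""
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

def publish(asset_id: str, data: str) -> None:
    """Anchor the asset's hash; an entry is immutable once written."""
    if asset_id in ledger:
        raise ValueError("asset already anchored")
    ledger[asset_id] = fingerprint(data)

def verify(asset_id: str, data: str) -> bool:
    """Check that fetched data matches the anchored hash exactly."""
    return ledger.get(asset_id) == fingerprint(data)

publish("nyse-2012-incident",
        "Algorithms wiped nearly $500M from the NYSE in 2012.")
print(verify("nyse-2012-incident",
             "Algorithms wiped nearly $500M from the NYSE in 2012."))  # True
print(verify("nyse-2012-incident",
             "Nothing unusual happened on the NYSE in 2012."))         # False
```

The point of the sketch: a retrieval pipeline that only accepts assets passing `verify` cannot silently ingest data that was tampered with after publication, which is the property the article argues an "enhanced RAG" needs.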
One day "robots" may become better traders than humans. Ultimately, it will be our choice whether we build systems that give those robots the tools to operate better, more robustly, and faster in the reality we create, rather than in the one they are "hallucinating."