Web3 and blockchain technology are much more than Bitcoin and NFTs. As companies realize the potential of Web3, smart contracts will play a key role.
Smart contracts enforce agreements between users in an automated, open, and trusted way. Because they are written in code and run on-chain, they can replace fragile, high-touch trust relationships that require extensive paperwork and human approval.
Lawrence Moloney is an award-winning researcher, best-selling author, and AI advocate at Google. He teaches several popular AI courses at Harvard University, Coursera, and more. Deeplearning.aiand is currently working on a Hollywood film about the intersection of technology and politics.
However, expressing agreement in code is a double-edged sword. Raw code, especially code written in the popular smart contract language Solidity, lacks the natural language processing capabilities needed to interpret human communication. It is therefore not surprising that most smart contracts follow strictly codified rules used by technical or financial experts.
Enter the Large-Scale Language Model (LLM). We're all familiar with applications like ChatGPT, which provides an interface to the underlying intelligence, reasoning, and language understanding of the LLM family. Imagine integrating this underlying intelligence with smart contracts. LLMs and smart contracts can work together to interpret natural language content such as legal codes or expressions of social norms. This opens the gateway to smarter, AI-powered smart contracts.
But before jumping on the bandwagon, we recommend considering the intersection of smart contracts and AI, especially the challenges of trust and safety.
Currently, when you chat with an LLM using an application like ChatGPT, there is little transparency about your interaction with the model. Model versions can be silently changed by new training. And the prompt will probably be filtered, i.e. changed, in the background. Usually to protect the vendor of the model at the cost of changing the intent. Smart contracts that use LLM suffer from these issues, which violate the fundamental principle of transparency.
Imagine Alice selling NFT-based tickets to a live concert. She uses LLM-powered Smart Contracts to handle her business's logistics and interpret instructions such as her cancellation policy of “cancel at least 30 days in advance for a full refund.” Masu. This works fine at first. But suppose her underlying LLM is updated after being trained with new data, including a patchwork of local laws regarding event ticketing. The contract could suddenly deny a previously valid return or authorize an invalid return without Alice's knowledge. This resulted in confusion for her customers and hasty manual intervention by Alice.
Another issue is that LLMs can be tricked into deliberately defeating or circumventing safeguards using carefully crafted prompts. These prompts are called adversarial input. With AI models and threats constantly evolving, adversarial inputs are proving to be a persistent security problem for AI.
Suppose that Alice implements a refund policy called “Refunds for Significant Weather or Aviation-Related Events.” She implements this policy by simply allowing users to submit natural language refund requests with evidence consisting of a pointer to her website. In that case, we believe that a malicious attacker could send hostile inputs, i.e. fake refund requests that steal money by hijacking control of Alice's smart LLM that executes her contract. You can Conceptually, it would look something like this:
In that case, Alice could quickly become bankrupt.
We believe that three types of authentication are the key to securely using LLM in smart contracts.
First, there is certification of models, including LLM. Interfaces to ML models must include a reliable and unique interface identifier that accurately specifies both the model and its execution environment. Only with such an identifier can users and smart contract authors be confident how the LLM will behave now and in the future.
Second, there is the authentication of input to the LLM. This means ensuring that the input is reliable for a particular purpose. For example, to decide whether to refund a ticket purchase, Alice's smart contract relies on a web of authoritative weather and aviation information, where the data is interpreted by the underlying LLM, rather than a raw natural language request from the user. It may only accept pointers to the site. This setting helps filter hostile input.
Finally, there is user authentication. Filter, restrict, or otherwise control misbehaving users, ideally in a privacy-preserving manner by requiring users to provide trusted credentials or make payments. You can For example, Alice might limit interactions to paying customers to control (computationally expensive) spam requests to LLM.
There is much work to be done to achieve the three pillars of certification. The good news is that today's Web3 technologies, such as Oracle, are a solid starting point. The oracle has already authenticated that the input to the smart contract comes from a trusted web server. Web3 tools are also emerging for privacy-preserving user authentication.
As the use of generative AI in business increases, the AI ​​community is grappling with a variety of challenges. As AI begins to power smart contracts, Web3 infrastructure can bring new safety and trust tools to AI, making the intersection of AI and Web3 significantly and mutually beneficial.