Towards the end of the keynote, Google discussed Gemma, its family of open source AI models, along with broader safety considerations.
This is a very different approach from the one Google takes with its flagship Gemini models. Gemini is tightly closed: external developers cannot see the code behind it or the weights of the models themselves. You can only use Gemini off the shelf, and for enterprise use cases you go through Google's cloud.
Compare this to Meta, which has open sourced most of its Llama models and is far more open than Google. That openness also makes it unclear how Meta will profit from its huge investment in Llama.
Google's path to revenue is clearer: subscribers and cloud customers simply pay to use its tools. That may be one reason the Gemma open source models felt like an afterthought at the end of the I/O keynote. Meta, meanwhile, has the potential to rally a large community of developers around Llama, which could pay off in the future. For now, though, most of the top AI companies are taking the closed path. Ironically, that includes “open” AI.
On safety, Google said it “red teams” its AI updates, stress testing them for vulnerabilities before release.
And to help fight misinformation, Google's SynthID tool can watermark AI-generated images.