BY Fast Company 2 MINUTE READ

Google recently announced a set of new large language models, collectively called “Gemma,” and a return to the practice of releasing new research into the open-source ecosystem. The new models were developed by Google DeepMind and other teams within the company that already brought us the state-of-the-art Gemini models.

The Gemma models come in two sizes: one comprising a neural network with 2 billion adjustable variables (called parameters) and one with 7 billion parameters. Both are significantly smaller than the largest Gemini model, “Ultra,” which is said to be well beyond a trillion parameters, and more in line with the 1.8B- and 3.25B-parameter Gemini Nano models. While Gemini Ultra is capable of handling large or nuanced requests, it requires data centers full of expensive servers.

The Gemma models, meanwhile, are small enough to run on a laptop or desktop workstation. Or they can run in Google Cloud, for a price. (Google says its researchers optimized the Gemma models to run on Nvidia GPUs and Google Cloud TPUs.)

The Gemma models will be released to developers on Hugging Face, accompanied by the model weights that resulted from pretraining. Google will also include the inference code and the code for fine-tuning the models. It is not supplying the data or code used during pretraining. Both Gemma sizes come in two variants: one that has only been pretrained, and one that has also been fine-tuned on pairs of questions and corresponding answers.

But why is Google releasing open models in a climate where state-of-the-art LLMs are hidden away as proprietary? In short, Google is acknowledging that a great many developers, large and small, don’t just build their apps atop a proprietary third-party LLM (such as Google’s Gemini or OpenAI’s GPT-4) accessed via a paid API; they also use free, open-source models at certain times and for certain tasks.

The company may rather see non-API developers build with a Google model than move their app to Meta’s Llama or some other open-source model. That developer would remain in Google’s ecosystem and might be more likely to host their models in Google Cloud, for example. For the same reasons, Google built Gemma to work on a variety of common development platforms.

There’s of course a risk that bad actors will use open-source generative AI models to do harm. Google DeepMind director Tris Warkentin said during a call with media on Tuesday that Google researchers tried to simulate all the nasty ways that bad actors might try to use Gemma, then used extensive fine-tuning and reinforcement learning to keep the model from doing those things.
