Google DeepMind has announced the addition of two new models to its Gemma lineup, enhancing tools for machine learning developers and researchers. The new models, CodeGemma and RecurrentGemma, are designed to provide state-of-the-art, efficient solutions in code generation and research experimentation.
With @GoogleCloud, we’re releasing CodeGemma & RecurrentGemma: two new additions to the Gemma family of lightweight, state-of-the-art open models.
These focus on empowering developers and researchers, helping them to build responsibly. → https://t.co/m1oSwFChDt #GoogleCloudNext
— Google DeepMind (@GoogleDeepMind) April 10, 2024
As Google DeepMind rolls out CodeGemma and RecurrentGemma, early community reaction offers a useful gauge of how the models are landing in practice.
In an update that builds on the February release of the Gemma model family, Google DeepMind introduced CodeGemma and RecurrentGemma, promising to enhance the coding and research capabilities of ML developers.
This announcement follows a wave of positive feedback from the community, which has seen the Gemma models integrated into various applications and tools.
Today, we’re excited to announce our first round of additions to the Gemma family, expanding the possibilities for ML developers to innovate responsibly: CodeGemma for code completion and generation tasks as well as instruction following, and RecurrentGemma, an efficiency-optimized architecture for research experimentation. Plus, we’re sharing some updates to Gemma and our terms aimed at improvements based on invaluable feedback we’ve heard from the community and our partners.
CodeGemma: Built for Code Completion and Generation

CodeGemma emerges as a versatile tool for developers, offering models specifically tailored for code completion and generation.
Available in a 7B pretrained variant for general coding tasks and a 7B instruction-tuned variant for more complex code interactions, CodeGemma aims to streamline the development process. It also features a 2B variant designed for quick, efficient code completion on local machines.
The benefits of CodeGemma include:
Advanced Code Generation: Automatically complete lines or blocks of code, improving productivity and reducing errors.
High Accuracy: Trained on extensive data, including web documents and code, ensuring both syntactic and semantic precision.
Support for Multiple Languages: Assists with coding in various languages like Python, JavaScript, and Java, adapting to diverse development needs.
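For the code-completion use case, CodeGemma's published model card describes a fill-in-the-middle (FIM) prompt format, where the model generates the code between a given prefix and suffix. The sketch below shows how such a prompt is assembled; the sentinel token strings follow the model card, but treat them as assumptions to verify against the release you actually use.

```python
# Illustrative sketch of a fill-in-the-middle (FIM) prompt for code
# completion. The sentinel token names below are taken from CodeGemma's
# model card and should be checked against your specific model release.

FIM_PREFIX = "<|fim_prefix|>"
FIM_SUFFIX = "<|fim_suffix|>"
FIM_MIDDLE = "<|fim_middle|>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Ask the model to generate the code that belongs between
    `prefix` and `suffix` (the cursor position in an editor)."""
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}"

# Example: complete the body of a function while keeping the code
# after the cursor intact.
prompt = build_fim_prompt(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(2, 3))\n",
)
print(prompt)
```

The prompt string would then be passed to the model's generation API; the model's output is the "middle" that fits between the two fragments.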
RecurrentGemma: Tailored for Research Efficiency
People do have some questions, though:
Are the models available via the API? I’m currently having a great time with Gemini 1.5 Pro.
— Robosquad (@human_for_now) April 10, 2024
RecurrentGemma introduces a new architecture that optimizes memory use and increases throughput, which is particularly beneficial for research environments.
Using recurrent neural networks and local attention mechanisms, this model is especially suited for generating longer sequences on devices with limited computational power.
Advantages of RecurrentGemma include:
Lower Memory Usage: Efficient memory management allows for the processing of larger data sets on constrained devices.
Increased Throughput: Capable of larger batch sizes, RecurrentGemma accelerates token generation, especially in extensive sequence operations.
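The memory advantage comes from a structural difference: a standard attention decoder caches a key and value vector for every generated token, so its cache grows with sequence length, while a recurrent layer carries a fixed-size state no matter how long the sequence gets. The toy calculation below illustrates that scaling difference only; it is a back-of-the-envelope sketch with made-up layer counts and dimensions, not RecurrentGemma's actual architecture.

```python
# Toy comparison of per-step inference state: a KV cache grows linearly
# with the number of generated tokens, while a recurrent hidden state
# stays fixed. Layer count and dimensions below are illustrative
# placeholders, not RecurrentGemma's real configuration.

def kv_cache_entries(seq_len: int, n_layers: int, d_model: int) -> int:
    # One key vector and one value vector cached per token, per layer.
    return seq_len * n_layers * 2 * d_model

def recurrent_state_entries(n_layers: int, d_state: int) -> int:
    # A fixed-size hidden state per layer, independent of sequence length.
    return n_layers * d_state

for seq_len in (1_000, 8_000, 64_000):
    kv = kv_cache_entries(seq_len, n_layers=26, d_model=2560)
    rec = recurrent_state_entries(n_layers=26, d_state=2560)
    print(f"{seq_len:>6} tokens: KV cache {kv:,} entries vs recurrent state {rec:,}")
```

As the sequence grows, the attention-style cache scales linearly while the recurrent state does not, which is why a recurrent architecture can generate long sequences on memory-constrained devices and sustain larger batch sizes.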
The introduction of these models marks a significant step forward in the capabilities offered to the ML community, underscoring Google DeepMind’s commitment to responsible innovation and community-driven development.
The Google AI team said in its official blog post: “We invite you to try the CodeGemma and RecurrentGemma models and share your feedback on Kaggle. Together, let’s shape the future of AI-powered content creation and understanding.”
Google DeepMind remains dedicated to refining these models, with ongoing updates influenced by community feedback and collaboration with partners. This approach ensures that the Gemma family continues to meet the evolving needs of developers and researchers worldwide.