Meta Ramps Up AI Portfolio with Four New Models and Advanced Research Artifacts!

  • Editor
  • July 19, 2024

Meta FAIR has announced the release of four new AI models along with several research artifacts, a significant step toward a more open AI ecosystem.

These advancements are poised to inspire innovation within the AI community and contribute to the responsible advancement of AI technologies.

Meta Chameleon

Meta Chameleon comprises 7B and 34B language models whose released checkpoints support mixed-modal input (text and images) with text-only outputs.

The underlying architecture, however, is natively mixed-modal: a single unified transformer handles both encoding and decoding of interleaved text and image tokens, so it can in principle accept and generate any combination of the two.

Meta Chameleon’s use of tokenization for text and images, rather than diffusion-based learning, enables a more unified approach and makes the model easier to design, maintain, and scale.
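To make the idea of a unified token space concrete, here is a purely illustrative sketch (not Meta's implementation): image patches are assumed to already be discrete codes from a learned image tokenizer, and those codes are simply offset into a reserved range of the shared vocabulary so text and image tokens can flow through one model as a single stream. All names, vocabulary contents, and ID ranges below are invented for the example.

```python
# Toy sketch: one shared vocabulary for interleaved text and image tokens.
# Real Chameleon uses a learned (VQ-style) image tokenizer; here the image
# codes are given directly and only the ID-offset trick is demonstrated.

TEXT_VOCAB = {"<s>": 0, "a": 1, "photo": 2, "of": 3, "cat": 4}
TEXT_VOCAB_SIZE = 5            # text IDs occupy [0, TEXT_VOCAB_SIZE)
IMAGE_CODEBOOK_SIZE = 8        # image IDs occupy the next 8 slots

def tokenize_text(words):
    return [TEXT_VOCAB[w] for w in words]

def tokenize_image(patch_codes):
    # Offset discrete image codes into the image range of the shared
    # vocabulary so they never collide with text IDs.
    return [TEXT_VOCAB_SIZE + c for c in patch_codes]

def tokenize_mixed(segments):
    # segments: list of ("text", [words]) or ("image", [codes]) pairs,
    # flattened into a single token stream for one transformer.
    stream = []
    for kind, payload in segments:
        stream += tokenize_text(payload) if kind == "text" else tokenize_image(payload)
    return stream

tokens = tokenize_mixed([
    ("text", ["<s>", "a", "photo", "of"]),
    ("image", [3, 1, 7]),
    ("text", ["cat"]),
])
print(tokens)  # [0, 1, 2, 3, 8, 6, 12, 4]
```

Because both modalities live in one ID space, the same next-token objective and the same decoder apply to text and images alike, which is the design simplification the paragraph above describes.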


Key components of the Chameleon 7B and 34B models are being released under a research-only license, with steps taken to ensure responsible development while recognizing existing risks.

Meta Multi-Token Prediction

Meta Multi-Token Prediction is a pre-trained language model designed for code completion using multi-token prediction.

This approach trains language models to predict several future tokens at once rather than one at a time, enhancing model capabilities, improving training efficiency, and enabling faster inference.
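A minimal sketch of what "predicting multiple future tokens" means for the training data, under the assumption (invented for illustration, not taken from Meta's code) that each position gets one target per lookahead offset and positions without a full window of future tokens are simply skipped:

```python
# Toy sketch of multi-token prediction targets: at each position the model
# is trained to predict the next n_future tokens in parallel (conceptually,
# one output head per offset) instead of only the single next token.

def multi_token_targets(tokens, n_future=4):
    """For each position t, pair the context tokens[0..t] with the
    n_future tokens that follow. Positions lacking a full window of
    future tokens are skipped in this simplified version."""
    pairs = []
    for t in range(len(tokens) - n_future):
        context = tokens[: t + 1]
        targets = tokens[t + 1 : t + 1 + n_future]
        pairs.append((context, targets))
    return pairs

seq = [10, 11, 12, 13, 14, 15]
for ctx, tgt in multi_token_targets(seq, n_future=4):
    print(ctx, "->", tgt)
# [10] -> [11, 12, 13, 14]
# [10, 11] -> [12, 13, 14, 15]
```

Each training step thus supplies n_future supervision signals per position instead of one, which is the source of the efficiency gain the paragraph describes; at inference, the extra predicted tokens can also be used to draft ahead and speed up generation.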

The pre-trained code-completion models are being released under a non-commercial/research-only license to enable independent investigation by the research community.

Meta JASCO

Meta JASCO, another new release, is a generative text-to-music model capable of accepting various conditioning inputs, such as specific chords or beats, for greater controllability.

Unlike existing models that rely mainly on text inputs for music generation, JASCO can incorporate both symbolic and audio-based conditions in the same text-to-music generation model.

This allows for improved control over generated music outputs. The research paper and a sample page are available now, with the inference code and the pre-trained model to be released soon.

Meta AudioSeal

Meta AudioSeal is an audio watermarking model designed specifically for the localized detection of AI-generated speech and is available under a commercial license.

This model revamps classical audio watermarking by focusing on the detection of AI-generated content rather than steganography, making detection up to 485 times faster than previous methods.

AudioSeal achieves state-of-the-art performance in terms of robustness and imperceptibility.
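To illustrate what "localized detection" means in practice, here is a deliberately simplified sketch, not AudioSeal's actual detector: we assume a detector has already produced a per-sample probability that the audio is watermarked (the scores below are fabricated), and show how thresholding those scores localizes the watermarked intervals rather than giving a single whole-file verdict.

```python
# Toy sketch of localized watermark detection: given per-sample watermark
# probabilities from a detector, threshold them to recover the (start, end)
# sample intervals that are flagged as AI-generated.

def localize_watermark(scores, threshold=0.5):
    """Return half-open (start, end) index intervals where score > threshold."""
    intervals, start = [], None
    for i, s in enumerate(scores):
        if s > threshold and start is None:
            start = i                      # interval opens
        elif s <= threshold and start is not None:
            intervals.append((start, i))   # interval closes
            start = None
    if start is not None:                  # interval runs to end of audio
        intervals.append((start, len(scores)))
    return intervals

# Fabricated detector output: two watermarked stretches in a short clip.
scores = [0.1, 0.2, 0.9, 0.95, 0.8, 0.3, 0.1, 0.7, 0.9, 0.2]
print(localize_watermark(scores))  # [(2, 5), (7, 9)]
```

Sample-level scoring like this is what lets a detector flag an AI-generated segment spliced into otherwise genuine audio, instead of only classifying the file as a whole.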

Responsible AI Artifacts

Meta is also releasing several Responsible AI (RAI) artifacts, including research, data, and code aimed at measuring and improving the representation of geographical and cultural preferences and diversity in AI systems.

The company emphasizes that access to state-of-the-art AI should be available to everyone, not just a few Big Tech companies, and is eager to see how the community will utilize these technologies.

PRISM Dataset

In addition to these models, Meta FAIR is supporting the release of the PRISM dataset, which maps the sociodemographics and stated preferences of 1,500 diverse participants from 75 countries.

This dataset aims to improve large language models (LLMs) by focusing on subjective and multicultural perspectives, demonstrating the importance of diverse feedback in AI development.

Geographical Disparities in Text-to-Image Models

Meta has developed automatic indicators called “DIG In” to evaluate potential geographical disparities in text-to-image models. This initiative includes a large-scale annotation study to understand how people in different regions perceive geographic representation.

Meta’s collaborative approach extends to mentoring graduate students in follow-up evaluations, introducing methods to improve the diversity of outputs from text-to-image models through contextualized guidance.

For more than a decade, Meta’s Fundamental AI Research (FAIR) team has focused on advancing the state of the art in AI through open research.

By maintaining an open science approach and sharing work with the community, Meta aims to build AI systems that work well for everyone and bring the world closer together.

The recent releases highlight Meta’s commitment to openness, collaboration, excellence, and scale, fostering an ecosystem where AI technologies are accessible and beneficial to a broad range of researchers and developers.



Dave Andre

Editor

Digital marketing enthusiast by day, nature wanderer by dusk. Dave Andre blends two decades of AI and SaaS expertise into impactful strategies for SMEs. His weekends? Lost in books on tech trends and rejuvenating on scenic trails.
