Meta presents LLaMA 3.2 with multimodal capabilities
Meta CEO Mark Zuckerberg announced a major upgrade to the company's AI family on Wednesday, introducing LLaMA 3.2, a multimodal open-source model.
The latest version supports both text and image inputs and is available in 11-billion- and 90-billion-parameter options. Its predecessor, LLaMA 3.1, was text-only and was released in 8-, 70-, and 405-billion-parameter versions.
The new model aims to strengthen Meta's AI ecosystem, particularly around more natural user interactions. Zuckerberg highlighted the integration of natural voice capabilities into Meta AI, predicting that voice will soon become a primary way of interacting with AI. He also emphasized LLaMA's open-source nature, describing it as a "cost-effective" solution for developers.