Meta Releases Latest AI Model, Adds Free Chatbot to Platforms

Facts

  • Meta unveiled on Thursday the first two versions of its Llama 3 large language model (LLM), which the company claims can outperform much larger models including Google's trillion-parameter Gemini Pro.1
  • The just-launched versions of the model were built with 8B and 70B parameters, the adjustable variables a model learns during training and a common measure of its size and complexity. A bigger 400B-parameter model has yet to be rolled out.2
  • This comes as several companies, including startups and established tech giants, have entered the artificial intelligence (AI) race since OpenAI launched its ChatGPT chatbot in late 2022.3
  • Llama 3 has been used to upgrade the company's smart assistant software Meta AI, which began to be incorporated across its platforms — including Facebook, Instagram, and WhatsApp — on Thursday.4
  • Meta AI is now available in English in more than a dozen countries outside of the US, such as Australia, Canada, Jamaica, New Zealand, Pakistan, Singapore, and South Africa.5
  • The chatbot has recently encountered issues with its responses, telling a parenting group that it has a child attending a real — and very specific — public school for gifted and talented students, and refusing to generate images of an Asian man with a white woman.6

Sources: 1The Register, 2Associated Press, 3Wall Street Journal, 4New York Times, 5Forbes, and 6Daily Mail.

Narratives

  • Narrative A, as provided by AI Business. As Meta has begun to launch its Llama 3-powered chatbot across its apps, users will now be able to ask for restaurant recommendations and access real-time information without having to bounce to a separate page. Additionally, high-quality generated images — and GIFs — will be available for free, carrying a watermark label intended to guard against deepfakes.
  • Narrative B, as provided by Washington Post. Given that Meta's social media platforms have long been fertile ground for misinformation and extremist content, the introduction of a chatbot can only aggravate that issue due to its tendency toward 'hallucinations' and false responses. The fact that a technology is new does not mean its potential harms must be accepted.

Predictions