
Meta Begins Development of Next-Generation AI Models ‘Mango’ and ‘Avocado’ [Junior Electronics]

Meta Platforms is pressing ahead with the development of next-generation AI models specialized in images, video, and text. The Wall Street Journal reported on the 18th (local time) that Meta is preparing an image- and video-focused AI model, codenamed “Mango,” and a next-generation text-based large language model (LLM), codenamed “Avocado.” Both models are expected to be released in the first half of 2026.

Alexandr Wang, Meta’s Chief AI Officer (CAIO), outlined these plans during a recent internal Q&A session with Chief Product Officer (CPO) Chris Cox. Wang explained that one of Avocado’s main goals is stronger coding capabilities, and said that development of a “world model,” which learns from visual information to understand its surrounding environment, is also in its early stages.

This summer, Meta reorganized its AI division and brought in Wang as CAIO to lead the newly established “Meta Superintelligence Labs.” CEO Mark Zuckerberg has been aggressively recruiting talent, hiring more than 50 researchers and engineers, including over 20 former OpenAI researchers.

In the AI industry, image and video generation have become key battlegrounds. Google introduced its AI image generation model “Nano Banana,” and in September Meta launched “Vibes,” a video generation service built in collaboration with startup Midjourney. OpenAI has also intensified the competition with its AI video generation model “Sora.”
