Meta has introduced the third generation of its Llama AI model. The current versions have 8 billion and 70 billion parameters; a larger model with 400 billion parameters, which is still in training, will follow later.
According to Meta, the dataset used to train Llama 3 is seven times larger than the one used for last year's Llama 2 and contains four times as much code. The new tokenizer also encodes language much more efficiently, which should improve performance, and Meta uses grouped query attention to make inference more efficient. Of the training data, 95 percent is in English, so Meta expects lower performance in the thirty or so other languages the model covers.

Meta is now integrating Llama 3 as an AI assistant into services such as WhatsApp and Instagram, the company says. For the time being, the assistant is available only in English, in the United States, Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia and Zimbabwe.
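To illustrate the grouped query attention idea mentioned above: several query heads share a single key/value head, so the key/value cache is much smaller than with one KV head per query head. The sketch below is a minimal, generic example in PyTorch with made-up head counts and shapes; it is not Meta's implementation.

```python
# Hypothetical illustration of grouped query attention (GQA), not Meta's code.
import torch
import torch.nn.functional as F

batch, seq_len, head_dim = 2, 16, 64
n_query_heads, n_kv_heads = 8, 2              # 4 query heads share each KV head (example values)
group_size = n_query_heads // n_kv_heads

q = torch.randn(batch, n_query_heads, seq_len, head_dim)
k = torch.randn(batch, n_kv_heads, seq_len, head_dim)   # KV tensors are only n_kv_heads wide
v = torch.randn(batch, n_kv_heads, seq_len, head_dim)

# Repeat each KV head so it lines up with its group of query heads;
# the stored KV cache itself stays n_kv_heads wide, which is where the savings come from.
k = k.repeat_interleave(group_size, dim=1)
v = v.repeat_interleave(group_size, dim=1)

scores = (q @ k.transpose(-2, -1)) / head_dim ** 0.5
out = F.softmax(scores, dim=-1) @ v           # shape: (batch, n_query_heads, seq_len, head_dim)
print(out.shape)
```

With 2 KV heads serving 8 query heads, the cached keys and values are a quarter of the size they would be with standard multi-head attention, which is the efficiency gain the technique is generally used for.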