Meta has publicly released its largest and most capable AI model yet – Llama 3 405B. With over 400 billion parameters, it outperforms all previous Llama models and most other open-source models. But based on the criteria set out in the EU AI Act, Llama 3 405B would likely be classified as a general-purpose AI model with systemic risk, subject to additional obligations under the law.
European Union AI Act
With the AI Act, the EU aims to ensure trustworthy artificial intelligence. The Act defines rules for high-risk AI systems while seeking to foster innovation and protect fundamental rights. It also recognizes that some AI models may pose societal threats not only through how they are applied in a single domain, but through their sheer scale and capabilities alone. To address this, the regulation introduces the concept of “systemic risk.”
Meta Llama 3 405b is a Potential Systemic Risk
Under the AI Act, a general-purpose AI model is presumed to have “high-impact capabilities” – and therefore to pose systemic risk – when the cumulative compute used for its training exceeds 10^25 FLOP. Providers of such models face additional obligations following a review, including model evaluations, adversarial testing, serious-incident reporting, and cybersecurity protections.
Meta reports that pre-training Llama 3 405B required 3.8 × 10^25 FLOPs – nearly four times the threshold. This makes it a presumed systemic-risk model that must be notified to the Commission.
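The threshold comparison above is simple arithmetic, and can be sketched as a quick check (a minimal illustration; the variable names and the way compute is reported here are assumptions, not part of the Act or Meta's documentation):

```python
# EU AI Act presumption threshold for systemic risk:
# cumulative training compute above 10^25 FLOP.
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

# Pre-training compute reported by Meta for Llama 3 405B.
llama_3_405b_flop = 3.8e25

# Compare the reported compute against the threshold.
exceeds = llama_3_405b_flop > SYSTEMIC_RISK_THRESHOLD_FLOP
ratio = llama_3_405b_flop / SYSTEMIC_RISK_THRESHOLD_FLOP

print(f"Exceeds threshold: {exceeds} ({ratio:.1f}x the threshold)")
# → Exceeds threshold: True (3.8x the threshold)
```

Note that the Act's criterion is cumulative training compute, so fine-tuning and other training runs would count toward the total as well.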
Provider Must Notify Commission
As the provider, Meta must notify the Commission about the 405B model within two weeks of the threshold being met. The notification also allows Meta to present arguments that the model does not actually pose systemic risks despite meeting the compute criterion.
Commission Oversight on Llama 3 405b
The Commission then decides whether the model indeed has high-impact capabilities and poses a systemic risk, based on Meta’s arguments and, potentially, a qualified alert from a panel of independent experts. This oversight aims to ensure trustworthy and beneficial AI.
Concluding Thoughts
The EU AI Act establishes an important framework for assessing potential risks from large language models like Meta Llama 3 405b. Notification and Commission review help balance innovation with societal safeguards.