AI Regulation — EU AI Act and Global Frameworks
The EU AI Act (2024) is the world's first comprehensive AI law. It classifies AI systems by risk: prohibited (e.g. social scoring, real-time biometric surveillance), high-risk (medical devices, hiring, credit scoring), limited risk (chatbots, which must disclose their AI nature), and minimal risk (spam filters). Understanding its compliance requirements is now essential for AI engineers.
EU AI Act Compliance Framework
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# EU AI ACT -- Risk Classification (in force August 2024)
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
eu_ai_act_risk_levels = {
    "PROHIBITED (Article 5)": {
        "deadline": "Feb 2025",
        "examples": [
            "Social scoring by governments",
            "Real-time biometric identification in public spaces (with narrow exceptions)",
            "AI that exploits vulnerabilities (children, elderly, disabled)",
            "Subliminal AI manipulation techniques",
            "Predictive policing based solely on profiling",
        ],
        "penalty": "Up to 35 million EUR or 7% of global annual revenue",
    },
    "HIGH RISK (Annex III)": {
        "deadline": "Aug 2026",
        "examples": [
            "AI in medical devices and diagnostics",
            "Automated hiring decision systems (CV screening, interviews)",
            "Credit scoring and insurance risk assessment",
            "AI for critical infrastructure (power grids, water, transport)",
            "Educational assessment (exam grading, admissions)",
            "Law enforcement and border control AI",
        ],
        "requirements": [
            "Mandatory human oversight",
            "Transparency and technical documentation",
            "Pre-deployment conformity assessment (CE marking equivalent)",
            "Bias testing and fundamental rights impact assessment",
            "Registration in the EU database",
            "Post-market monitoring and incident reporting",
        ],
        "penalty": "Up to 15 million EUR or 3% of global annual revenue",
    },
    "LIMITED RISK": {
        "deadline": "Aug 2025",
        "examples": [
            "Chatbots (must disclose AI to users)",
            "Deepfake content (must be watermarked)",
        ],
        "requirements": ["Transparency obligation -- users must know they're talking to AI"],
    },
    "MINIMAL RISK": {
        "examples": ["AI spam filters", "AI-powered video editing", "Product recommendation engines"],
        "requirements": ["Voluntary Code of Conduct"],
    },
    "GENERAL PURPOSE AI (GPAI)": {
        "deadline": "Aug 2025",
        "applies_to": "Foundation models / LLMs used for general purposes",
        "requirements": [
            "Technical documentation (architecture, training data, capabilities)",
            "Copyright compliance for training data",
            "Models with > 10^25 FLOPs training compute: additional systemic risk requirements",
        ],
        "systemic_risk_models": ["GPT-4", "Gemini Ultra", "Claude 3 Opus"],  # estimated
    },
}
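The 10^25 FLOPs systemic-risk threshold for GPAI models can be checked mechanically. A minimal sketch; the function name and the example compute figures are illustrative, not official disclosures:

```python
# The Act presumes systemic risk for GPAI models trained with
# more than 10^25 floating-point operations.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if training compute crosses the presumption threshold."""
    return training_flops > SYSTEMIC_RISK_FLOPS

# Illustrative compute figures (made up for this example):
for name, flops in [("frontier-scale model", 2.1e25), ("small fine-tune", 3e21)]:
    print(f"{name}: systemic risk presumed = {presumed_systemic_risk(flops)}")
```

A provider near the threshold would of course rely on its measured training compute, not a hardcoded constant; the point is only that the presumption is a simple numeric test.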
# COMPLIANCE CHECKLIST for a high-risk AI system
high_risk_checklist = {
    "Risk management system": "Documented throughout lifecycle, continuous monitoring",
    "Data governance": "Training data documented, bias checked, relevant, representative",
    "Technical documentation": "System design, capabilities, limitations, intended use",
    "Logging": "Automatic logging sufficient to trace post-deployment issues",
    "Transparency to users": "Clear disclosure when interacting with AI, what data is used",
    "Human oversight": "Humans can understand, monitor, and override AI decisions",
    "Accuracy specifications": "Clearly documented accuracy metrics and known failure modes",
    "Robustness and security": "Adversarial robustness testing, cyber-security measures",
    "Conformity assessment": "Internal or third-party audit before deployment",
    "Registration": "Register in EU database at database.euaiact.eu",
}

print("EU AI Act Compliance Checklist for High-Risk AI Systems:")
for requirement, description in high_risk_checklist.items():
    print(f"  [ ] {requirement:30s}: {description}")
# GLOBAL FRAMEWORKS beyond EU
global_frameworks = {
    "EU AI Act (2024)": "Most comprehensive, risk-based, mandatory compliance",
    "NIST AI RMF (2023)": "USA voluntary framework: GOVERN-MAP-MEASURE-MANAGE",
    "UK AI Pro-Innovation (2023)": "Sector-specific guidance, lighter touch than EU",
    "China AI Regulations (2023)": "Mandatory for generative AI services in China",
    "G7 Hiroshima AI Process": "Voluntary international principles for LLMs",
    "ISO/IEC 42001 (2023)": "AI management system standard (like ISO 27001 for AI)",
}

print("\nGlobal AI Regulatory Frameworks:")
for framework, description in global_frameworks.items():
    print(f"  {framework:35s}: {description}")
Tip
Practice these EU AI Act concepts in small, isolated examples before integrating compliance logic into larger projects. Breaking the risk tiers and checklists into small experiments builds genuine understanding faster than reading alone.
Practice Task
(1) Rebuild the EU AI Act risk-classification structure from scratch without looking at notes. (2) Modify it to handle an edge case (empty input, null value, or an unknown risk tier). (3) Share your solution in the Priygop community for feedback.
Common Mistake
A common mistake when encoding regulatory frameworks like these is skipping edge case testing: empty inputs, null values, and unexpected data types. Always validate boundary conditions to write robust, production-ready AI code.
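For example, a lookup over a risk-level dictionary should not assume a valid tier name. A minimal sketch, with a hypothetical `lookup_deadline` helper and illustrative error messages:

```python
# Guard a risk-tier lookup against empty or unknown input.
def lookup_deadline(risk_levels: dict, tier: str) -> str:
    if not isinstance(tier, str) or not tier.strip():
        raise ValueError("tier must be a non-empty string")
    if tier not in risk_levels:
        raise KeyError(f"unknown risk tier: {tier!r}")
    # Not every tier has a deadline (e.g. minimal risk), so default gracefully.
    return risk_levels[tier].get("deadline", "no deadline specified")

levels = {"HIGH RISK (Annex III)": {"deadline": "Aug 2026"}, "MINIMAL RISK": {}}
print(lookup_deadline(levels, "HIGH RISK (Annex III)"))
print(lookup_deadline(levels, "MINIMAL RISK"))
```

The two failure modes (empty string, unknown tier) raise distinct exception types, which makes the boundary conditions easy to assert in tests.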