EU Parliament’s Vision for AI
For the EU Parliament, the foundations of any AI framework are safety, transparency, traceability, non-discrimination, and environmental sustainability. The principle is clear: AI systems should be overseen by people rather than by automation, to prevent harmful outcomes.
The Parliament also wants a uniform, technology-neutral definition of AI that can be applied to future AI systems.
The AI Act: Different Rules for Different Risk Levels
The rules establish obligations for AI providers and users depending on the level of risk posed by the AI system:
- Unacceptable risk:
AI systems considered a threat to people will be banned. This includes:
- Cognitive behavioural manipulation of people or specific vulnerable groups, for example voice-activated toys that encourage dangerous behaviour in children.
- Social scoring: classifying people based on behaviour, socio-economic status, or personal characteristics.
- Real-time and remote biometric identification systems, such as facial recognition.
Some exceptions may apply. For instance, "post" remote biometric identification systems, where identification occurs after a significant delay, may be permitted to prosecute serious crimes, but only with court approval.
- High risk:
AI systems that negatively affect safety or fundamental rights fall into this tier, which is divided into two categories:
- AI used in products covered by the EU's product safety legislation, such as toys, aviation, cars, medical devices, and lifts.
- AI systems in eight specific areas that will have to be registered in an EU database:
- Biometric identification and categorisation of natural persons.
- Management and operation of critical infrastructure.
- Education and vocational training.
- Employment, worker management, and access to self-employment.
- Access to essential private and public services and benefits.
- Law enforcement.
- Migration, asylum, and border control management.
- Assistance in legal interpretation and application of the law.
All high-risk AI systems will be assessed before being placed on the market and throughout their lifecycle.
- Generative AI:
Generative AI systems, such as ChatGPT, would have to comply with transparency requirements:
- Disclosing that content was generated by AI.
- Designing the model to prevent it from generating illegal content.
- Publishing summaries of the copyrighted data used for training.
- Limited risk:
These AI systems must meet minimal transparency requirements that allow users to make informed decisions. This includes AI systems that generate or manipulate image, audio, or video content, for example deepfakes.
Progress Snapshot
MEPs adopted the Parliament's negotiating position on the AI Act on 14 June 2023. Talks will now begin with EU countries in the Council to agree on the final form of the law.
The aim is to reach an agreement by the end of the year.