Regulation (EU) 2024/1689 · EU AI Act · Article 4
Direct answer: yes. Article 4 requires many organisations to ensure a sufficient level of AI literacy among staff (and other people operating AI on their behalf). But no, it does not mandate one official course or a single EU-wide certificate.
- Goal: informed use (limits, risks, verification and human oversight).
- Role-based: different teams need different depth.
- Evidence-ready: syllabus, materials, attendance, policies and QA checklists.
What Article 4 actually requires
Article 4 does not prescribe a fixed training format. It requires providers and deployers to take measures so that staff (and other people operating AI on their behalf) have AI literacy: enough knowledge and understanding to deploy and use AI in an informed way, understand its risks, and prevent harm.
Who it applies to (not just “AI vendors”)
It applies to organisations that build AI and to those that use AI internally—marketing, support, HR, operations, finance, analytics, and automation workflows.
Common internal scenarios
- Marketing/sales: copy, proposals, product messaging, lead qualification.
- Customer support: chatbots, ticket triage, suggested replies.
- HR: internal documentation support, process assistance.
- Ops/finance: reporting, reconciliation, anomaly detection.
Also: include contractors or service providers who operate AI on your behalf.
What “sufficient AI literacy” should include
There is no one-size-fits-all. A defensible minimum is role-based training + operational guidance + shared quality standards.
- Fundamentals: what AI you use, why, its limitations, and where human oversight is required.
- Risks: hallucinations, bias, security, GDPR/data handling, IP, traceability.
- Operational guidance: templates, checklists, and "how we review outputs here".
- Incident response: what to do after a serious error, leak, or high-impact mistake.
How to prove compliance: keep an internal record
No EU-wide “AI Act certificate” is required. What matters is being able to show you took reasonable measures.
- Role-based syllabus and learning outcomes.
- Materials (guides, checklists, examples).
- Attendance log.
- AI use policy (data, IP, approvals, allowed channels).
- Human review rules and QA samples.
- Update cadence (tool changes, risk changes, periodic refresh).
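The record-keeping items above can be sketched as a small data model. This is a hypothetical illustration of how such an internal evidence file might be structured; the field names and classes are assumptions, not a format the AI Act prescribes.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrainingRecord:
    """One completed training event for one person (illustrative)."""
    employee: str
    role: str
    module: str          # e.g. "baseline" or "marketing-specific"
    completed_on: date

@dataclass
class EvidenceFile:
    """Internal record of AI-literacy measures (illustrative)."""
    policy_version: str
    last_refresh: date
    records: list[TrainingRecord] = field(default_factory=list)

    def attendance_for(self, module: str) -> list[str]:
        """List everyone who has completed a given training module."""
        return [r.employee for r in self.records if r.module == module]

evidence = EvidenceFile(policy_version="1.2", last_refresh=date(2025, 2, 1))
evidence.records.append(
    TrainingRecord("A. Smith", "marketing", "baseline", date(2025, 1, 15))
)
print(evidence.attendance_for("baseline"))
```

Even a spreadsheet serves the same purpose; the point is that attendance, policy version, and refresh dates are queryable when you need to show the measures you took.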
High-risk use: stronger human oversight
When AI use is high-impact, you need stronger human oversight: clear accountable roles, authority to intervene, and practical procedures—not just generic training.
Timeline + action checklist
- 2 Feb 2025: general provisions apply (including AI literacy) + prohibitions.
- 2 Aug 2026: most obligations apply and enforcement starts.
- 2 Aug 2027: extended timelines for certain high-risk product-related systems.
- Inventory your AI tools/systems (including “office AI”).
- Assign owners (deployment, oversight, data/security).
- Map risks per use case (data, people impact, reputation).
- Publish an AI use policy + “never do” rules.
- Deliver role-based training (baseline + specific + reinforced if needed).
- Implement QA checklists and human review where required.
- Store internal evidence and refresh periodically.
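The risk-mapping and QA steps in the checklist above can be made concrete as a small lookup: each use case gets a risk level, and the risk level determines the controls. The risk labels and rules here are illustrative assumptions for your own policy, not categories defined by the AI Act.

```python
# Assumed internal policy: which controls attach to which risk level.
RISK_RULES = {
    "low":    {"human_review": False, "training": "baseline"},
    "medium": {"human_review": True,  "training": "baseline + role-specific"},
    "high":   {"human_review": True,  "training": "reinforced + named owner"},
}

def controls_for(use_case: str, risk: str) -> dict:
    """Return the controls our (assumed) policy attaches to a use case."""
    if risk not in RISK_RULES:
        raise ValueError(f"Unknown risk level: {risk}")
    return {"use_case": use_case, **RISK_RULES[risk]}

print(controls_for("support chatbot", "medium"))
```

A table like this, kept alongside the AI inventory, makes it easy to answer "who reviews what, and how were they trained" per use case.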
Note: informational content, not legal advice.
FAQs
Do we need to train everyone?
Not necessarily. Focus on people who use AI (and those operating it on your behalf). Depth depends on role and risk.
Is there an official AI Act certificate?
No single EU-wide mandatory certificate. Keep a solid internal record of measures and evidence.
Are tool instructions alone enough?
Usually not. Combine training or briefings with rules, checklists, and human review where needed.
How often should we refresh training?
When tools, use cases, or risks change, plus a periodic refresh (a quarterly or semi-annual cadence works well).
What can Bastelia deliver?
Role-based AI literacy programs with practical workflows, policies, QA checklists and audit-friendly evidence. Email info@bastelia.com.
