This AI literacy obligation under Article 4 of the AI Act, which takes effect on 2 February 2025, applies to all AI systems regardless of the risk level they present. It is not merely a regulatory checkbox, but an essential part of effective and responsible AI governance.
-
#1. Providers and deployers of AI systems must comply with the regulatory requirements for AI literacy
What is AI literacy? The definition in Article 3(56) of the AI Act refers to “skills, knowledge and understanding that allow providers, deployers and affected persons - taking into account their respective rights and obligations in the context of the AI Act - to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.” In simpler terms, AI literacy equips individuals with the know-how to responsibly evaluate, deploy, and use AI technologies. It is about understanding not just how AI works, but also the ethical, legal, and societal implications of AI-driven decisions. With the widespread uptake of generative AI, most organisations will likely face the challenge of meeting the AI literacy requirement, while also being demonstrably accountable for the effort and outcomes involved.
-
#2. AI literacy goes beyond compliance and technical aspects
AI literacy involves understanding how to use AI effectively, strategically, and responsibly. It encompasses technical aspects such as machine learning and data analysis, as well as ethical issues such as bias in algorithms, data privacy, and AI’s societal impact. A key factor is tailoring the AI literacy programme to different roles within the organisation, considering the target audience's technical knowledge, experience and training. Guidance from the Autoriteit Persoonsgegevens, the Dutch supervisory authority, states that the “level of AI literacy of each employee must be in line with the context in which the AI systems are used and how (groups of) people may be affected by the systems”. It also provides examples in this respect. For instance, employees performing an HR function need to understand that an AI system may contain biases or exclude essential information, which could result in an applicant being selected or rejected for the wrong reasons. From marketing, HR and finance to management, everyone plays a role in the responsible application of AI.
-
#3. An AI literacy plan must promote continuous learning and adaptation
While there are no direct fines for failing to ensure AI literacy, non-compliance with Article 4 could influence the severity of penalties for other violations of the AI Act. For instance, providing incorrect or incomplete information to notified bodies or national authorities could lead to penalties of up to EUR 7,500,000 or 1% of the organisation’s total worldwide annual turnover for the previous financial year, whichever is higher. It is important to keep this in mind when assessing and documenting your organisation's AI literacy level. Penalties become applicable on 2 August this year, so now is the time to act.
Moreover, AI literacy programmes should aim for trustworthy use of AI, not just the avoidance of fines. This requires ongoing effort, benefiting compliance and competitive edge alike. It is no surprise that 80% of C-suite executives believe AI will kickstart a culture shift in which teams become more innovative. Keeping up with AI advancements is challenging, as is achieving AI literacy. Organisations must identify their AI literacy needs, maintain adequate levels tailored to their specific context, and demonstrate the importance of AI literacy throughout all levels of the organisation. This assessment should be systematically documented, periodically reviewed, and include a roadmap for advancing the programme. An AI literacy plan must also promote continuous learning and support this process with appropriate resources, timely information, and regularly updated assumptions.
-
EU AI Act summary (pdf)
The EU AI Act provides a framework to ensure that AI systems are safe, transparent, and respect fundamental rights. Our two-page summary outlines its scope, definitions, risk classifications of AI systems, the enforcement mechanism, and penalties. It also includes an overview of important dates and steps for effective governance. You can read or download the summary and timeline here.
Contact
At NautaDutilh, we closely monitor these developments. Should you have any questions regarding the EU AI Act and its implications for your current and future business operations, please do not hesitate to contact us.