Publication
18.02.2025
On 4 February 2025, the European Commission released the first set of guidelines on the definition of artificial intelligence ("AI") pursuant to the AI Act (the "Definition Guidelines") and on prohibited AI practices (the "Prohibited AI Guidelines"). These Guidelines were published further to the first wave of provisions of the AI Act entering into application on 2 February 2025, covering prohibited AI practices and AI literacy, both of which require an understanding of what constitutes an AI system and which of its practices are prohibited.
  • Is my system an AI or not?

    This is the question that entities must answer to determine whether they fall within the scope of the AI Act. An AI system differs from more "traditional" systems by its non-deterministic nature: a system which, by looking at the code, is predictable in every respect in terms of input/output would generally not amount to an AI. The Definition Guidelines in this regard provide further details on each of the seven elements comprising the AI system definition.

    Key clarifications include that:

    • an AI system may, but does not have to, possess adaptiveness or self-learning capabilities after deployment (beyond what it was initially trained on).
    • whether or not the AI system is used for its intended purpose, it remains an AI system for the purposes of the AI Act.
    • the definition of inference under the AI Act does not contradict, but is not fully aligned with, the same definition under the ISO/IEC 22989 standard on AI concepts and terminology. Both however aim to go beyond the simple input/output logic of traditional systems (or "basic data processing"), with the AI being capable of inferring how rather than what output to generate, as illustrated in the sketch below. Inferences not stemming from data (e.g., rule-based approaches, pattern recognition, or trial-and-error strategies) are excluded from the definition.
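
    To make that distinction concrete, below is a minimal, purely illustrative sketch (the shipping-fee scenario, names and figures are hypothetical, not taken from the Guidelines): the first function's input/output behaviour is fully predictable by reading the code, while the second system infers its mapping from training data.

      import numpy as np
      from sklearn.tree import DecisionTreeRegressor

      # Deterministic, rule-based system: every output can be predicted by
      # reading the code, so it would generally not amount to an AI system.
      def shipping_fee(weight_kg: float) -> float:
          return 5.0 if weight_kg <= 2 else 5.0 + 1.5 * (weight_kg - 2)

      # Machine-learning system: the input/output mapping is not written by a
      # developer but inferred from (hypothetical) training data.
      X = np.array([[1.0], [2.0], [4.0], [8.0]])  # parcel weights
      y = np.array([5.0, 5.0, 8.0, 14.0])         # observed fees
      model = DecisionTreeRegressor().fit(X, y)
      print(model.predict([[3.0]]))  # the "how" was learned, not coded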

    The Definition Guidelines also notably list some examples of techniques which would enable the inference of the AI system, echoing the list proposed by the European Commission in the first draft of the AI Act but later removed following the vote on the text (including machine learning, (un)supervised learning, reinforcement learning, deep learning, logic- and knowledge-based approaches, …).

    However, the Definition Guidelines do exclude a certain number of AI systems, notably those using linear or logistic regression methods. Whilst this exclusion is welcome, as such methods are widely used in market analysis and financial risk assessments, they are generally understood as a subset of supervised learning methods for developing AIs; the exclusion may therefore create uncertainty as to its scope, in particular as the inferences of such a system must only contribute to its "computational performance" rather than bettering its "decision making models in an intelligent way" (in other words favouring efficiency over the quality of the results, which may be documented with the appropriate performance benchmarks, e.g., in TFLOPS).
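
    For illustration, the sketch below shows the kind of logistic regression credit-risk model the exclusion appears to target; the features, figures and library choice are hypothetical assumptions, not anything prescribed by the Guidelines.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical applicant features: [income (kEUR), debt ratio, years employed]
      X = np.array([[45, 0.30, 5], [22, 0.65, 1], [80, 0.20, 12], [30, 0.55, 2]])
      y = np.array([0, 1, 0, 1])  # 1 = defaulted, 0 = repaid (toy labels)

      model = LogisticRegression().fit(X, y)
      # Default probability for a new applicant; a fixed formula once trained.
      print(model.predict_proba([[50, 0.40, 3]])[0, 1])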

    If your system does amount to an AI, the Definition Guidelines further point to the grandfathering clause for high-risk AI systems placed on the EU market or put into service before 2 August 2026 (their obligations being triggered only upon significant changes in their design), and note that the vast majority of (low-risk) systems will not be subject to any regulatory requirements.

  • Is my AI prohibited or not?

    If you answered the first question in the affirmative, as from 2 February 2025 an AI system which amounts to a prohibited practice can no longer be placed on the market or put into use. The explicitly prohibited AI practices, set out under Article 5(1) of the AI Act, encompass a range of activities, including but not limited to (i) manipulative and deceptive AI systems, (ii) AI systems that exploit vulnerabilities based on age, disability, or socio-economic conditions, (iii) social scoring mechanisms, (iv) massive facial image scraping systems, (v) emotion recognition technologies, and (vi) AI-driven biometric categorisation for sensitive data. The 140-page Prohibited AI Guidelines aim to bring further clarity as to the extent of such prohibitions and the exceptions included in this list.

    The Prohibited AI Guidelines notably highlight that such prohibitions do not extend to the research and development (R&D) phase of the development of the AI, citing as an example that "AI developers have the freedom to experiment and test new functionalities which might involve techniques that could be seen as manipulative […] recognising that early-stage R&D is essential for refining AI technologies and ensuring that they meet safety and ethical standards prior to their placing on the market". It should however be noted that the development of the AI will nevertheless remain subject to considerations such as data protection by design and by default, as well as ethical and professional requirements. On this note, a number of providers of general-purpose AI models have put in place codes of conduct or ethics policies which are binding upon their users, and which may curb certain applications used for testing purposes.

    It also stems from the Prohibited AI Guidelines that the interplay between the AI Act and other legislation, notably the GDPR, will be a key consideration for its enforcement. For instance, the European Commission cites dark patterns within the meaning of the Digital Services Act ("DSA") as an example of manipulative or deceptive techniques under the AI Act, where they are likely to cause significant harm. As another example, the guidelines strictly follow the CJEU's SCHUFA decision, holding that a decision will be assessed as automated pursuant to the GDPR even if the automation (i.e., the AI) was only used in the course of the decision-making process and a human gave the final validation.

    Another key lesson from the Prohibited AI Guidelines seems to be the intention to recognise current practices in the financial sector as not being subject to the above prohibitions. The European Commission clarifies that the social scoring prohibition does not target the scoring of legal entities, except where such a score reflects an aggregate of the individuals within the entity. Furthermore, whilst the risk of such a social score being used to assess the creditworthiness of an individual is part of the concerns behind the prohibition, credit scoring and risk scoring in the financial and insurance sector should be deemed a "legitimate" (i.e., non-prohibited) practice. It should nevertheless be noted that such practices will likely qualify as high-risk AI systems, except where used for the purpose of detecting financial fraud.

    A common thread running through the Prohibited AI Guidelines, echoing the above, is the reliance on other existing regulations as sources of guidance to frame the AI Act's requirements with more precision. The guidelines for example refer to the EBA's Guidelines on loan origination and monitoring (EBA/GL/2020/06) as a reference point for compliance, and thereby for determining whether a credit-rating practice is prohibited or not. Regarding the prediction of criminal offences, a practice which is prohibited both for law enforcement authorities and for private entities tasked with public missions, such as anti-money laundering (AML) checks, the guidelines clarify that strict compliance with AML requirements (in terms of data minimisation and to the extent there are objective and verifiable markers of AML risk) "will ensure that the use of individual crime prediction AI system for anti-money laundering purposes falls outside the scope of the prohibition".

    On the scope of the prohibition, the guidelines notably confirm that an AI system released under a free and open-source licence remains subject to the prohibition of the above practices. The guidelines also note that the prohibition refers specifically to a practice, rather than to the AI as a whole. In other words, an AI system which is capable of a prohibited practice (e.g., a chatbot being used for manipulative and deceptive techniques) is not forbidden from being placed on the market to the extent the required guardrails are put in place to prevent it (including refusals by the AI of certain instructions and monitoring of compliance with such limitations). It should be noted that, for an AI deployed in the workplace (another example of interplay with other legislation), the monitoring of employees' use of such AI may trigger the specific provisions of the Luxembourg Labour Code. Furthermore, on the basis of Article 86 of the AI Act, such employees, as affected persons, may lodge a complaint with the relevant market surveillance authority (the CNPD along with other sectoral bodies pursuant to the bill of law 8476 supplementing the AI Act).

    Regarding enforcement, the European Commission also confirms that where the same conduct amounts to multiple violations of the AI Act, only one penalty shall be imposed for such conduct, within the maximum amounts set out in the regulation (i.e., for a prohibited AI practice, EUR 35 000 000 or up to 7 % of the offender's total worldwide annual turnover for the preceding financial year, whichever is higher).
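
    As a purely illustrative reading of that cap (not legal advice; the helper name and turnover figure are hypothetical), the sketch below computes the maximum fine for a prohibited practice:

      # AI Act cap for a prohibited practice: EUR 35 000 000 or 7 % of total
      # worldwide annual turnover for the preceding financial year, whichever
      # is higher.
      def max_fine_prohibited_practice(annual_turnover_eur: float) -> float:
          return max(35_000_000.0, 0.07 * annual_turnover_eur)

      # A company with EUR 1 bn turnover: 7 % = EUR 70 m, above the fixed cap.
      print(max_fine_prohibited_practice(1_000_000_000))  # 70000000.0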

  • What about biometric surveillance?

    Biometric data is treated as a special category of personal data under the AI Act, as well as under other applicable regulations such as the GDPR. However, whereas the GDPR requires biometric data to serve an identification function, the AI Act adopts a broader interpretation that encompasses biometric data even if it does not enable unique identification.

    The AI Act explicitly prohibits two types of AI practices involving biometric data. It is prohibited to place on the market, put into service for this specific purpose, or use biometric categorisation systems that "categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation". Such categorisation could indeed be used for politically targeted messages or to discriminate against a person. The European Commission nevertheless points out that certain AI uses should not be prohibited, such as filters on online marketplace apps that categorise bodily features to allow a consumer to preview a product on themselves, provided the person is not individually categorised and the system does not deduce or infer other sensitive characteristics.

    The AI Act also prohibits real-time remote biometric identification ("RBI") systems in public spaces for law enforcement purposes. These systems enable automatic recognition of "physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database". Deployers of real-time RBI systems that instantaneously capture measurable physical or behavioural characteristics as machine-readable biometric data are therefore covered by this prohibition.

    "Real-time" in this context should be understood as processing "without any significant delay", i.e., before the person is likely to have left the place where the biometric data was captured.

    The Guidelines further confirm that practices of biometric verification (or authentication) fall outside the scope of the prohibition, such as where AI is used to compare a reference image (the template) with data presented through a sensor (e.g., smartphone, passport, ID card, …).
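
    To make the verification/identification distinction concrete, below is a minimal, purely illustrative sketch (the embeddings, similarity threshold and function names are assumptions, not anything prescribed by the AI Act): a 1:1 comparison against a single reference template, versus the regulated 1:N search against a database of individuals.

      import numpy as np

      def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      # 1:1 verification (outside the prohibition): compare the sensor capture
      # against one reference template, e.g. the photo stored on an ID card.
      def verify(capture: np.ndarray, template: np.ndarray,
                 threshold: float = 0.8) -> bool:
          return cosine_similarity(capture, template) >= threshold

      # 1:N identification (the regulated scenario): search the capture against
      # a database of many individuals to establish who the person is.
      def identify(capture: np.ndarray, database: dict[str, np.ndarray],
                   threshold: float = 0.8) -> str | None:
          best_id, best_score = None, threshold
          for person_id, template in database.items():
              score = cosine_similarity(capture, template)
              if score >= best_score:
                  best_id, best_score = person_id, score
          return best_id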

    Regarding the issue of surveillance, the above must not be confused with the prohibition on inferring emotions in the area of the workplace and education institutions (save for medical or safety reasons). The Prohibited AI Guidelines in this regard clarify that a bank may deploy an AI-powered camera system "to detect suspicious customers, for example to conclude that somebody is about to commit a robbery" on the condition that "no employees are being tracked and there are sufficient safeguards". Employees should be understood broadly as any staff irrespective of their status, including prospective employees during the recruitment process.

  • What should I do now?

    It is essential to map and determine which systems amount to AI systems pursuant to the Definition Guidelines, in order to then filter out and cease the use, where necessary, of any prohibited AI practice pursuant to the Prohibited AI Guidelines, as such prohibitions have applied since 2 February 2025 (even though no authority in Luxembourg is yet empowered to enforce these provisions).

    Further to the mapping exercise, entities must then turn to the assessment of the AI system from a risk perspective, with high-risk AIs being subject to the main regulatory requirements under the AI Act.

    In the meantime, it should be noted that the provisions on AI literacy have also applied since 2 February 2025, requiring entities to provide training to ensure such literacy among their staff. In practice, and as was seen with the GDPR, there is for the moment no market consensus as to the contents of such trainings, with entities required to strike a balance between high-level information and technical jargon.

This article was published in the February 2025 edition of Agefi Luxembourg.
