7 August 2024

General articles

International and European law — IT and communication law

Regulation (EU) 2024/1689 on Artificial Intelligence (AI Act), applicable to providers and deployers of non-EU countries

Summary

Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence or “Artificial Intelligence Act” (hereinafter “AI Act”), adopted by the Council on 21 May 2024, was published in the Official Journal of the European Union (OJEU) on 12 July 2024.

The purpose of the AI Act is "to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation." (Article 1(1) AI Act).

A forerunner at global level, the AI Act establishes harmonised rules for the placing on the market, the putting into service, and the use of AI systems in the European Union (EU). The stringency of these rules depends on the level of risk generated by the AI system: prohibition of certain AI practices, specific requirements for high-risk AI systems and obligations for the operators concerned, and transparency rules for lower-risk AI systems, among others (Article 1(2) AI Act).

Because of its extraterritorial scope, the AI Act is likely to apply to providers (those that develop an AI system or a general-purpose AI model, or that have one developed, and place it on the market or put the AI system into service under their own name or trademark, whether for payment or free of charge) and deployers (those using an AI system under their authority, except where the AI system is used in the course of a personal, non-professional activity) from non-EU countries.

The AI Act enters into application in stages. It will apply from 2 August 2026, except for the following provisions:

  • application of the general provisions (Chapter I) and the rules relating to artificial intelligence systems with an unacceptable risk (Chapter II) from 2 February 2025;
  • application of the provisions relating to notifying authorities and notified bodies (Chapter III, Section 4), the rules relating to general-purpose AI models (Chapter V, with the exception of Article 101 relating to fines applicable to providers of general-purpose AI models), the provisions relating to governance (Chapter VII) and confidentiality (Article 78) from 2 August 2025;
  • application of the rules relating to high-risk AI systems (Article 6(1) and corresponding obligations) from 2 August 2027.
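
By way of illustration, this staggered timetable can be expressed as a simple lookup. The following minimal sketch in Python reproduces the dates listed above; the grouping labels are simplified summaries with no official status.

```python
from datetime import date

# Illustrative sketch only: the dates reproduce the staggered timetable
# summarised above; the grouping labels are simplified and have no
# official status.
MILESTONES = [
    (date(2025, 2, 2), "General provisions (Chapter I) and prohibited AI practices (Chapter II)"),
    (date(2025, 8, 2), "Notifying authorities and notified bodies (Chapter III, Section 4); "
                       "general-purpose AI models (Chapter V, except Article 101); "
                       "governance (Chapter VII); confidentiality (Article 78)"),
    (date(2026, 8, 2), "General applicability of the remaining provisions of the AI Act"),
    (date(2027, 8, 2), "High-risk AI systems under Article 6(1) and corresponding obligations"),
]

def applicable_provisions(on: date) -> list[str]:
    """Return the provision groups already applicable on the given date."""
    return [label for milestone, label in MILESTONES if on >= milestone]

# Example: on 1 January 2026, only the first two groups apply.
for group in applicable_provisions(date(2026, 1, 1)):
    print(group)
```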

Other texts will supplement the AI Act (delegated and implementing acts, codes of conduct and guidelines, harmonised standards, codes of good practice, national texts).

The AI Act is part of the EU's digital "legislative package", together with Regulation (EU) 2022/868 of 30 May 2022 on European data governance (DGA), Regulation (EU) 2022/1925 of 14 September 2022 on contestable and fair markets in the digital sector (DMA) and Regulation (EU) 2022/2065 of 19 October 2022 on digital services (DSA).

* * *

NOTE: On 16 July 2024, the European Data Protection Board (EDPB) adopted Statement 3/2024 on data protection authorities’ role in the Artificial Intelligence Act framework. EU data protection law is fully applicable to the processing of personal data involved in the lifecycle of AI systems (Recitals 9 and 10 and Article 2(7) AI Act).

Contents

¤ The scope of the AI Act for non-EU countries such as Monaco

  • Extraterritorial scope
  • Source of inspiration

¤ Content of the AI Act (key points)

  • Risk-based approach
  • Unacceptable risk: prohibited AI practices (Chapter II AI Act)
  • High risk: requirements for high-risk AI systems and obligations for the operators concerned (Chapter III AI Act)
  • Lower risk: transparency obligations for the operators concerned (Chapter IV AI Act)
  • Governance (Chapter VII AI Act)
  • Penalties (Chapter XII AI Act)

* * *

¤ The scope of the AI Act for non-EU countries such as Monaco

→ The AI Act is of great interest to countries outside the EU, such as Monaco, if only for its extraterritorial scope. It is aimed at providers (developers) and deployers (users in the course of a business activity) established or located in a third country who place AI systems or models on the market or put them into service in the EU, or where the output produced by the AI system is used in the EU (Article 2(1) AI Act).

AI systems may therefore fall within the scope of the AI Act even though they are not placed on the market, put into service or used in the EU. The aim is to ensure effective protection of individuals located in the EU whose data, lawfully collected and transferred from the EU, may be processed by an operator in a third country. This is the case, for example, where an operator established in the EU entrusts an operator established in a third country with performing, by means of an AI system that would qualify as high-risk, services whose output is intended to be used in the EU (Recital 22 AI Act).

The providers and deployers concerned (as defined above) may be natural or legal persons, including companies, associations and public authorities.

→ Secondly, legislators in third countries could find inspiration in it. From the Principality's point of view, "Artificial intelligence will be a natural extension of the Princely Government's policy for ethical and responsible digital technology", and the AI Act "could very well inform future legislation, including Monegasque legislation". Indeed, "legislating in a manner consistent with the European ecosystem (...) means being aligned with a continent for which respect for privacy and the protection of users is a priority". What's more, "an agreement on the subject of AI will have economic potential for the Principality" because of its "economic interests (...) closely linked to Europe". (Frédéric Genta, Délégué interministériel à la transition numérique et à l’attractivité, « Il va falloir penser un monde hybride, mêlant humain et machine, au service de l’humain », Dossier Intelligence artificielle, Monaco Hebdo, 14 May 2024).

It may also be noted that the explanatory memorandum to Bill no. 1087, on the use of video-protection and video-surveillance of places accessible to the public for the detection, search and identification of persons wanted or reported by means of remote biometric identification systems, refers to the provisions of the AI Act concerning the use of AI systems for remote biometric identification in publicly accessible spaces for law enforcement purposes.

* * *

¤ Content of the AI Act (key points)

The AI Act is voluminous, comprising 180 recitals, 113 articles and 13 annexes. Without being exhaustive, here are some of the key points:

Risk-based approach

While the use of AI can be beneficial for businesses (competitive advantages), society and the environment (e.g. health, agriculture, food safety, education and training, media, sport, culture, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, environmental monitoring, preservation and restoration of biodiversity and ecosystems, climate change mitigation and adaptation), it can also be a source of harm (material or immaterial, including physical, psychological, societal or economic) depending on the circumstances surrounding its application and use and the level of technological development (Recitals 4 and 5).

The AI Act adopts a risk-based approach (Recital 26). The greater the risk of harm to the fundamental rights of individuals or to society (e.g. impact on data protection, errors, cognitive bias, discrimination) generated by the AI system, the stricter the rules laid down by the AI Act:

  • prohibition of certain unacceptable AI practices,
  • requirements for high-risk AI systems, and obligations for the operators concerned (the most significant aspect of the AI Act),
  • transparency requirements for lower-risk AI systems,
  • free use of AI with minimal or no risk, with encouragement to draw up codes of conduct (e.g. AI-enabled video games, spam filters; the vast majority of AI systems currently in use in the EU). A simplified mapping of these tiers is sketched below.
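
As a simplified mental model only, the four risk tiers and their regulatory treatment can be captured in a small mapping. Whether a real system falls into a given tier is a legal determination under Articles 5 and 6 and Annex III of the AI Act, which this toy lookup cannot perform.

```python
from enum import Enum

# A simplified mental model of the risk ladder described above. Whether a
# real system falls into a given tier is a legal determination (Articles 5
# and 6 and Annex III AI Act), not something this mapping can decide.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

REGULATORY_TREATMENT = {
    RiskTier.UNACCEPTABLE: "prohibited AI practices (Chapter II)",
    RiskTier.HIGH: "requirements and operator obligations (Chapter III)",
    RiskTier.LIMITED: "transparency obligations (Chapter IV)",
    RiskTier.MINIMAL: "free use; voluntary codes of conduct encouraged",
}

# Example lookup.
print(REGULATORY_TREATMENT[RiskTier.HIGH])
```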

Unacceptable risk: prohibited AI practices (Chapter II AI Act)

→ The following placing on the market, putting into service or use of an AI system shall be prohibited:

  • that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm;
  • that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;
  • for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to the detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected, and/or that is unjustified or disproportionate to their social behaviour or its gravity;

→ The following placing on the market, the putting into service for this specific purpose, or the use of an AI system shall also be prohibited:

  • for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity;
  • that creates or expands facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
  • to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;
  • that individually categorises natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data, or the categorising of biometric data in the area of law enforcement;

→ Finally, the following is prohibited:

  • the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for: (i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons; (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack; (iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years. For Monaco, see Bill no. 1087 on the use of remote biometric identification to safeguard national security.

High risk: requirements for high-risk AI systems and obligations for the operators concerned (Chapter III AI Act)

→ AI systems identified as high risk are listed in the following areas (Annex III AI Act):

  • Biometrics (remote biometric identification systems, AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics, or for emotion recognition);
  • Critical infrastructure (AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity);
  • Education and vocational training (AI systems intended to be used to determine access or admission or to assign natural persons to training institutions at all levels; or to evaluate learning outcomes; or for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access; or for monitoring and detecting prohibited behaviour of students during tests);
  • Employment, workers management and access to self-employment (AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates; or to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships);
  • Access to and enjoyment of essential private services and essential public services and benefits, with the exception of AI systems used for the purpose of detecting financial fraud (AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services; to evaluate the creditworthiness of natural persons or establish their credit score; or for risk assessment and pricing in relation to natural persons in the case of life and health insurance; to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems);
  • Law enforcement (AI systems intended to be used in support of law enforcement authorities or on their behalf to assess the risk of a natural person becoming the victim of criminal offences; or as polygraphs or similar tools; or to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences; for assessing the risk of a natural person offending or re-offending not solely on the basis of the profiling of natural persons, or to assess personality traits and characteristics or past criminal behaviour of natural persons or groups; or for the profiling of natural persons in the course of the detection, investigation or prosecution of criminal offences);
  • Migration, asylum and border control management (AI systems intended to be used as polygraphs or similar tools; or to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or who has entered into the territory of a Member State; or for the examination of applications for asylum, visa or residence permits and for associated complaints with regard to the eligibility of the natural persons applying for a status, including related assessments of the reliability of evidence; or for the purpose of detecting, recognising or identifying natural persons, with the exception of the verification of travel documents);
  • Administration of justice and democratic processes (AI systems intended to be used to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution; or for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda).

→ The obligations of providers and deployers of high-risk AI systems (Chapter III AI Act):

  • Compliance with the requirements applicable to high-risk AI systems (Section 2): risk management system; training, validation and testing datasets subject to data governance and management practices; technical documentation; automatic recording of events (logs); transparency and provision of information to deployers (instructions for use); human oversight during use; accuracy, robustness and cybersecurity;
  • Obligations of providers of high-risk AI systems (Section 3): quality management system ensuring compliance with the AI Act, documented in written policies, procedures and instructions; retention of documentation for 10 years after the high-risk AI system is placed on the market or put into service, kept at the disposal of the competent national authorities; keeping of automatically generated logs for at least six months; where necessary, corrective measures to bring the high-risk AI system into compliance, withdraw it, disable it or recall it, informing distributors and, where applicable, deployers, authorised representatives and importers; cooperation with the competent authorities; for providers established in third countries, appointment of an authorised representative established in the EU; EU declaration of conformity; registration in the EU database.
  • Obligations of deployers of high-risk AI systems (Section 3): technical and organisational measures to ensure that high-risk AI systems are used in accordance with the accompanying instructions for use; human oversight; where appropriate, control over input data, which must be relevant and sufficiently representative in view of the intended purpose of the high-risk AI system; monitoring the operation of the high-risk AI system on the basis of the instructions for use and, where appropriate, in the event of a risk or incident, informing, as the case may be, the provider, distributor, importer or market surveillance authority; where appropriate, keeping the logs generated automatically by the high-risk AI system; specific rules for deployers that are financial institutions, employers, public authorities or EU institutions, bodies, offices or agencies, and for remote biometric identification.

Lower risk: transparency obligations for the operators concerned (Chapter IV AI Act)

→ Transparency obligations of providers and deployers depending on the AI system concerned:

  • Providers of AI systems intended to interact directly with natural persons: design and development in such a way that the natural persons concerned are informed that they are interacting with an AI system; subject to exceptions;
  • Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content (‘generative AI’ such as ChatGPT, Dall-E, Jukebox, LLaMa, Ernie, Bard, Runway, Synthesia.io, D-ID, etc.): marking the output of the AI system in a machine-readable format and detectable as having been artificially generated or manipulated; effective, interoperable, robust and reliable technical solutions (relevant technical standards); subject to exceptions (a minimal sketch of such a marking appears after this list);
    • Specific rules are also laid down for general-purpose AI models which may or may not present systemic risks (e.g. production of technical documentation, summary of algorithm training datasets, information and documentation to be provided to downstream providers who intend to integrate the model into their own AI system, so that they understand its capabilities and limitations and are able to comply with them, establishment of a policy on copyright and related rights, etc.);
  • Deployers of an emotion recognition system or a biometric categorisation system: informing the natural persons exposed to it about the operation of the system; processing personal data in accordance with the applicable personal data protection legislation; subject to exceptions;
  • Deployers of an AI system that generates or manipulates images or audio or video content constituting a deep fake: indication that the content has been generated or manipulated by an AI (where the content forms part of a work or programme that is manifestly artistic, creative, satirical, fictitious or the like: disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hinder the display or enjoyment of the work);
  • Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest: indication that the text has been artificially generated or manipulated; with exceptions (in particular when the content generated by AI has undergone a process of human review or editorial control and when a natural or legal person holds editorial responsibility for the publication of the content).

→ Information must be provided to the natural persons concerned in a clear and recognisable manner at the latest at the time of the first interaction or exposure, and must comply with the applicable accessibility requirements.
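
By way of illustration of the marking obligation for generative AI mentioned above: the AI Act requires output to be marked in a machine-readable format and detectable as artificially generated, but it does not prescribe a concrete format; real deployments would rely on relevant technical standards (such as embedded watermarks or content-provenance metadata). The following minimal sketch, whose field names are invented, merely shows the idea of attaching and then detecting such a marker.

```python
import json
from datetime import datetime, timezone

# Illustrative sketch only: the AI Act does not prescribe a marking format,
# and all field names below are invented for the example.
def mark_as_ai_generated(content: str, generator: str) -> str:
    """Wrap a piece of text output in a machine-readable provenance record."""
    record = {
        "content": content,
        "provenance": {
            "ai_generated": True,          # the machine-readable marker itself
            "generator": generator,        # which system produced the content
            "marked_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record, ensure_ascii=False)

def is_ai_generated(serialized: str) -> bool:
    """Detect the marker added by mark_as_ai_generated."""
    try:
        return bool(json.loads(serialized)["provenance"]["ai_generated"])
    except (ValueError, KeyError, TypeError):
        return False

# Example round trip.
marked = mark_as_ai_generated("A synthetic press release.", "example-model")
print(is_ai_generated(marked))  # True
```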

Governance (Chapter VII AI Act)

Governance at EU level

  • AI Office, set up within the Commission, through which the Commission develops EU expertise and capabilities in the field of AI. The Commission has exclusive powers to monitor and enforce compliance with the obligations incumbent on providers of general-purpose AI models.
  • European Artificial Intelligence Board, which advises and assists the Commission and the Member States in order to facilitate the consistent and effective application of the AI Act.
  • Advisory forum, which represents commercial and non-commercial interests (industry, start-ups, SMEs, civil society and academia).
  • Scientific panel of independent experts intended to support the implementation and enforcement of the AI Act.

National competent authorities

  • at least one notifying authority responsible for setting up and carrying out the procedures necessary for the assessment, designation and notification of conformity assessment bodies and for monitoring them;
  • at least one market surveillance authority carrying out the tasks and taking the measures provided for in Regulation (EU) 2019/1020 of 20 June 2019 on market surveillance and compliance of products. Any natural or legal person having grounds to consider that there has been an infringement of the provisions of the AI Act may lodge a complaint with the relevant market surveillance authority.

Penalties (Chapter XII AI Act)

Commission

Fines (effective, proportionate and dissuasive) applicable to providers of general-purpose AI models: the Commission may impose fines not exceeding 3% of their total worldwide annual turnover in the preceding financial year or €15,000,000, whichever is higher, where it finds that the provider, intentionally or negligently:

  • breached the relevant provisions of the AI Act,
  • failed to comply with a request for documents or information, or provided inaccurate, incomplete or misleading information,
  • failed to comply with a requested action,
  • failed to provide access to the general-purpose AI model, or the general-purpose AI model with systemic risk, for the purpose of conducting an assessment.

→ The factors taken into account in setting the amount of the fine or periodic penalty payment:

  • the nature, gravity and duration of the infringement, while taking due account of the principles of proportionality and adequacy;
  • the commitments made, including in the relevant codes of practice.

The Court of Justice of the European Union (CJEU) has unlimited jurisdiction to rule on appeals against decisions by which the Commission has set a fine. It may cancel, reduce or increase the fine imposed.

Member States

→ Member States shall lay down the rules on penalties (effective, proportionate and dissuasive) and other enforcement measures, which may also include warnings and non-monetary measures, applicable to infringements of the AI Act by operators, and shall take all measures necessary to ensure that they are properly and effectively implemented (guidelines published by the Commission). These penalties shall take into account the interests of SMEs, including start-ups, and their economic viability.

→ The amount of the administrative fines varies according to the type of infringement (the ‘whichever is higher’ cap logic is sketched after this list):

  • Non-compliance with the prohibition on AI practices: administrative fines of up to 35 000 000 EUR or, if the offender is an undertaking, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Non-compliance with other (listed) provisions of the AI Act: administrative fines of up to 15 000 000 EUR or, if the offender is an undertaking, up to 3 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Providing incorrect, incomplete or misleading information to the notified bodies or competent national authorities in reply to a request: administrative fines of up to 7 500 000 EUR or, if the offender is an undertaking, up to 1 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
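
As a minimal sketch of the ‘whichever is higher’ cap just described (amounts in euros; a simplified illustration, not legal advice), the following computes the upper bound of the fine for each tier, including the Commission's Article 101 cap for providers of general-purpose AI models discussed earlier; the tier keys are invented labels.

```python
# Sketch of the 'whichever is higher' cap: a fixed amount versus a percentage
# of total worldwide annual turnover for the preceding financial year.
# Amounts in euros; tier keys are invented labels for this illustration.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),    # Article 99: EUR 35M or 7%
    "other_listed_provisions": (15_000_000, 0.03), # Article 99: EUR 15M or 3%
    "incorrect_information": (7_500_000, 0.01),    # Article 99: EUR 7.5M or 1%
    "gpai_provider": (15_000_000, 0.03),           # Article 101 (Commission)
}

def max_fine(infringement: str, worldwide_annual_turnover: float) -> float:
    """Upper bound of the administrative fine for a given infringement type."""
    fixed_amount, turnover_rate = FINE_TIERS[infringement]
    return max(fixed_amount, turnover_rate * worldwide_annual_turnover)

# Example: an undertaking with EUR 2 billion turnover breaching a prohibition
# faces a cap of max(35M, 7% of 2bn) = EUR 140M.
print(max_fine("prohibited_practices", 2_000_000_000))
```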

→ Factors taken into consideration when deciding the amount of the administrative fine:

  • the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI system, as well as, where appropriate, the number of affected persons and the level of damage suffered by them;
  • whether administrative fines have already been applied by other market surveillance authorities to the same operator for the same infringement;
  • whether administrative fines have already been applied by other authorities to the same operator for infringements of other Union or national law, when such infringements result from the same activity or omission constituting a relevant infringement of the AI Act;
  • the size, the annual turnover and market share of the operator committing the infringement;
  • any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement;
  • the degree of cooperation with the national competent authorities, in order to remedy the infringement and mitigate its possible adverse effects;
  • the degree of responsibility of the operator taking into account the technical and organisational measures implemented by it;
  • the manner in which the infringement became known to the national competent authorities, in particular whether, and if so to what extent, the operator notified the infringement;
  • the intentional or negligent character of the infringement;
  • any action taken by the operator to mitigate the harm suffered by the affected persons.

* * *

With the AI Act, the EU intends not only to position itself as the reference player for ethical and regulated AI, but also to encourage innovation and the market entry of AI systems developed by SMEs and start-ups within the EU, via national AI "regulatory sandboxes" creating a controlled environment for experimentation and testing at the development and pre-marketing stage.
