AI Regulation Recap


On May 6th, amidst the global push to embrace artificial intelligence responsibly, the first official meeting of the AI Pact Initiative took place, marking a proactive step towards compliance with the forthcoming European AI Act.

Alien, along with 278 other organizations from around the world, participated in this inaugural session that aims to establish a foundation of trust in AI technologies among Europeans by ensuring that AI systems are safe and their operations transparent.

Expected to enter into force before the end of June 2024, the AI Act is recognized as the world’s first comprehensive legislative framework for artificial intelligence. It will be implemented gradually following its official adoption: most provisions become fully applicable 24 months after its entry into force, with several exceptions. Bans on prohibited practices take effect after 6 months; codes of practice after 9 months; governance and GPAI provisions after 12 months; and obligations for high-risk systems after 36 months.
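The staggered timeline above can be sketched as a small date calculation. The milestone offsets (6, 9, 12, 24 and 36 months) come from the Act’s phased rollout as described; the entry-into-force date used in the example is purely hypothetical, since the Act had not yet entered into force at the time of writing.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month `months` later (day clamped to month end)."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    last_day = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last_day))

# Applicability milestones, in months after entry into force
MILESTONES = {
    "bans on prohibited practices": 6,
    "codes of practice": 9,
    "governance and GPAI provisions": 12,
    "general applicability": 24,
    "high-risk system obligations": 36,
}

def applicability_dates(entry_into_force: date) -> dict:
    """Map each milestone to the date on which it becomes applicable."""
    return {name: add_months(entry_into_force, m) for name, m in MILESTONES.items()}

# Hypothetical entry-into-force date, for illustration only
for name, when in applicability_dates(date(2024, 8, 1)).items():
    print(f"{name}: {when.isoformat()}")
```

With an assumed entry into force of 1 August 2024, for instance, the ban on prohibited practices would apply from 1 February 2025 and high-risk obligations from 1 August 2027.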

What is the AI Pact?

The AI Pact was developed following the proposal of the AI Act in 2021 as a preparatory framework for the AI Act’s implementation. This initiative is aimed at helping organizations from various sectors, including AI companies and public entities, understand and comply with the forthcoming regulations through the sharing of best practices.

Anticipating the complexities involved in complying with such legislation, the Commission launched a call for interest in November 2023, receiving an enthusiastic response from over 550 organizations across various sectors and countries to join the AI Pact. In the EU, Germany has the highest number of participating organizations, followed by France and Belgium. Internationally, the US has a comparable level of participation to the leading EU countries, with other nations like Switzerland and the United Kingdom also showing strong involvement.

A recent survey highlighted that organizations’ primary motivations for joining the Pact include learning about the law’s applicability, understanding compliance requirements, and engaging with a broader network of AI practitioners and policymakers. The same survey indicates a moderate to high familiarity with the AI Act among most organizations.

To facilitate the implementation of the AI Act, the AI Pact follows a dual approach. Pillar I concentrates on gathering AI stakeholders and promoting the exchange of insights through workshops, aimed at clarifying the Act’s applicability and assisting organizations with compliance strategies. Pillar II, meanwhile, aims to accelerate the Act’s application by developing templates and engaging directly with leading AI developers.

About the AI Act

The AI Act, proposed by the European Commission on April 21, 2021, is set to be the first comprehensive legislative framework globally to govern artificial intelligence. Following its proposal, a final political agreement was reached in March 2024, which included significant amendments focusing on law enforcement and General Purpose AI (GPAI) models.

The AI Act introduces a risk-based approach to regulation, categorizing AI systems according to the level of risk they present. It delineates four categories of risk:

    • Unacceptable risks, where activities such as social scoring and untargeted scraping are prohibited due to their potential to harm societal values.
    • High risks, requiring AI applications in sensitive areas like recruitment and medical devices to comply with strict regulations and undergo an ex-ante conformity assessment.
    • Transparency risks, where systems capable of misleading humans, like impersonation tools or deep fakes, are permitted but subject to stringent transparency obligations.
    • Minimal or no risks, which represent the majority of AI systems (80 to 85%) and face minimal regulatory constraints, allowing adherence to voluntary codes of conduct.
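The four tiers above amount to a simple classification scheme: a use case maps to a risk tier, and the tier determines the applicable obligations. The sketch below illustrates this structure; the tier names and the prohibited and high-risk examples come from the text, while the mapping itself is a hypothetical illustration, not a legal determination.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the AI Act, with their regulatory consequence."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted after ex-ante conformity assessment"
    TRANSPARENCY = "permitted with transparency obligations"
    MINIMAL = "voluntary codes of conduct"

# Illustrative, non-exhaustive mapping of use cases to tiers,
# drawn from the examples listed above (hypothetical, not legal advice)
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "untargeted scraping": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "medical device AI": RiskTier.HIGH,
    "deep fakes": RiskTier.TRANSPARENCY,
}

def obligations(use_case: str) -> str:
    """Describe the tier and consequence for a use case; unknown cases default to minimal risk."""
    tier = EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

print(obligations("deep fakes"))
print(obligations("spam filtering"))  # not listed, falls into the minimal-risk majority
```

The default-to-minimal behavior mirrors the Act’s structure, under which the large majority of AI systems face only minimal constraints.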

The Act explicitly prohibits unethical practices, including manipulative AI techniques and unconsented real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions, subject to prior authorisation by a judicial or independent administrative authority).

AI systems categorized as high-risk because of their integration in safety-critical products must meet specific criteria before deployment: they are permitted only once they comply with the Act’s requirements and pass an ex-ante conformity assessment.

The AI Act also sets forth specific obligations for both providers and deployers of high-risk AI systems. Providers must implement comprehensive risk management systems, ensure data quality, maintain documentation for traceability, and conduct conformity assessments before placing a system on the market. Deployers, for their part, must operate these systems in accordance with the provider’s instructions and ensure robust human oversight to manage operational risks effectively.

EU: AI governance strategy

The European Commission has adopted a holistic governance approach towards AI regulation. In addition to the AI Pact, these efforts include the establishment of the AI Office and associated governance bodies such as the AI Board, the Scientific Panel and the Advisory Forum, aimed at ensuring a comprehensive and effective framework.

To facilitate compliance with the AI Act, the Commission is coordinating the development of codes of practice for General Purpose AI (GPAI), with completion expected within nine months of the Act’s entry into force. Alongside this, the Commission is drafting guidelines on AI system definitions and prohibitions.

Conclusion

Through ongoing dialogue, workshops, and collaborative efforts, the AI Pact, alongside other EU initiatives, is setting a precedent for how global regulation of emerging technologies can be approached, emphasizing collective, transparent, and inclusive AI governance.

by Primavera De Filippi