AI

EU Finalizes New AI Rules Targeting Deepfakes & Autonomous Weapons

by Admin

The European Union has formally adopted the Artificial Intelligence Act (AI Act), pioneering legislation that establishes strict rules for AI use, particularly targeting deepfakes and autonomous weapons. These regulations mark a major shift toward stronger governance of AI technologies across both civilian and defense domains.

Legislative Milestones & Scope

  • The AI Act was passed by the European Parliament on 13 March 2024, approved by the EU Council on 21 May 2024, and entered into force on 1 August 2024. Its provisions will be phased in over 6 to 36 months depending on risk level (Wikipedia).
  • It applies to all AI systems used professionally in the EU, including high-risk sectors such as healthcare, recruitment, law enforcement, and border control. Military and national security applications are exempt (Wikipedia).

Banned AI: Unacceptable Risks

Under Article 5, the law bans AI systems considered to involve “unacceptable risk”, including:

  • Social scoring systems that classify individuals based on behavior or traits;
  • AI used to manipulate emotions or decisions, including subliminal techniques;
  • Real-time biometric identification, such as facial recognition in public spaces, unless strictly authorized by a judge and used only for terrorism or missing-person cases (Wikipedia, SC Media).

These measures are designed to preempt misuse and protect fundamental rights and civil liberties (ANSA.it, SC Media).

Deepfakes: Transparency & High-Risk Oversight

  • AI-generated content such as deepfake videos, images, or audio must now be clearly labeled as synthetic to mitigate misinformation and protect reputations (The Sun).
  • Deepfakes deployed in contexts that may harm individuals or democratic processes, such as political campaigns or defamation, can be classified as high-risk. Those systems must undergo rigorous compliance checks, including data governance, documentation, human oversight, and conformity assessments (BioID).
  • Critics warn that the law’s definitions surrounding deepfakes remain too vague, creating legal ambiguity and compliance challenges (Globedge).

Autonomous Weapons: Ensuring Human Control

  • The EU has taken a clear stance against lethal autonomous weapons systems (LAWS) that make “selection and engagement decisions” without meaningful human oversight. Such systems are barred from receiving EU funding, such as from the European Defence Fund (EST – European Student Think Tank).
  • This aligns with longstanding European Parliament resolutions emphasizing human accountability in warfare and rejecting the delegation of lethal decision-making to machines (European Parliament).
  • Meanwhile, UN discussions, such as a May 2025 General Assembly meeting, are seeking to negotiate a global treaty regulating autonomous weapons, with a goal of legally binding rules by 2026 (Reuters).

Governance, Enforcement & Global Influence

  • The law establishes a new AI Office within the European Commission and a European Artificial Intelligence Board, shaping cooperation across member states to ensure consistent enforcement (Wikipedia).
  • National authorities are tasked with overseeing AI conformity, conducting market surveillance, and coordinating with the central EU bodies (Wikipedia).
  • Companies face steep penalties: fines of up to €35 million or 7% of global annual turnover, whichever is higher, for banned uses, and up to €15 million for non-compliance with high-risk system requirements (SC Media).
  • Because of its broad extraterritorial reach, similar to that of the GDPR, non-EU providers selling AI services in Europe must adhere to these rules. The AI Act is expected to become a global benchmark and to pressure other regions toward similar standards (Wikipedia, SC Media).
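
The penalty ceiling works as the higher of the two figures, so for large firms the turnover-based cap dominates. A minimal sketch of that arithmetic, with a hypothetical function name and the figures taken from the reporting above:

```python
# Illustrative sketch of the AI Act's fine ceiling for banned AI practices:
# €35 million or 7% of global annual turnover, whichever is higher.
# The function name is hypothetical, for illustration only.

def max_fine_banned_use(global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a prohibited AI practice."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# A company with €1 billion in global turnover: 7% (€70M) exceeds the €35M floor.
print(max_fine_banned_use(1_000_000_000))  # → 70000000.0
```

For smaller providers (turnover under €500 million), the fixed €35 million figure is the binding ceiling.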

Why It Matters

By prioritizing deepfake labeling and outlawing covert manipulation, the EU seeks to combat misinformation and protect vulnerable people, especially victims of non-consensual imagery or political smear campaigns (The Sun, BioID).

The ban on fully autonomous weapons underscores a commitment to ethical AI and to upholding human responsibility in lethal decisions, a position reinforced at both EU and UN levels (Globedge).

The risk-based model provides clear thresholds for innovation: low-risk tools like chatbots are lightly regulated, high-risk systems require certification, and unacceptable uses are prohibited outright (Wikipedia, SC Media).
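
The tiered structure above can be sketched as a simple lookup. The tier names follow the Act's risk categories, but the obligation summaries here are illustrative shorthand, not the legal text:

```python
# Illustrative mapping of the AI Act's risk tiers to their broad obligations.
# Summaries are paraphrased for illustration; consult the regulation itself.
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring)",
    "high": "conformity assessment, documentation, human oversight",
    "limited": "transparency duties (e.g. labeling AI-generated content)",
    "minimal": "largely unregulated (e.g. simple chatbots)",
}

def obligation(tier: str) -> str:
    """Return the summarized obligation for a given risk tier."""
    return RISK_TIERS.get(tier, "unknown tier")

print(obligation("high"))  # → conformity assessment, documentation, human oversight
```

The key design point of such a model is that compliance burden scales with potential harm rather than applying uniformly to every AI system.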

The AI Act reflects Europe’s ambition of strategic digital autonomy, influencing both domestic regulation and global AI governance trends (Wikipedia).

The EU’s finalization of the AI Act marks the world’s first comprehensive legal framework that integrates provisions addressing deepfakes and autonomous weapons. By embedding transparency mandates, risk-based oversight, and enforceable bans, the legislation sets a high bar for ethical, secure, and human-centered AI and may well redefine global expectations for AI governance.
