A “Perfect Storm”: New EU AI Legislation Highlighting the Risks of AI Misuse

According to Wikipedia, a “perfect storm” is an unusually severe storm that results from a rare combination of meteorological phenomena; the term is used by analogy for any event aggravated by a rare combination of circumstances.

One of our TOGAF students mentioned recently that their organization has banned the use of AI in their work environment. The first thought that struck me was that sticking one’s head in the sand and waiting until the storm passes is probably not a good idea. Risks do exist, but if we want to mitigate them, we must be able to assess their impact on our business and our work practices. My second thought was that a company already engaged in transforming from a bricks-and-clicks business to a digital operator model might see AI as more of a distraction at present. However, good management should be investigating how to use AI in a safe and regulated way to ensure enhanced delivery of business value and optimized work environments and processes.

Some forms of AI have been deployed in many companies for several years now and have helped them in their digital transformations, but it is the introduction of ChatGPT and LLMs (large language models) that has especially raised fears among many authorities and thought leaders about the potential misuse of AI and its impact on jobs and the futures of our knowledge workers.

Several high-profile incidents involving AI have captured public attention and increased the demand for its regulation. However, current guidelines and standards rolled out by individual institutions globally are considered by many to be high-level and open to differing interpretations, making them difficult to put into practice. In the EU, this subject has been in public debate for some time, and work on legislation started more than two years ago. In fact, in April 2021, the European Commission proposed the first EU regulatory framework for AI. The focus of this legislation is on risk management: AI systems considered for use in different applications are analyzed and classified according to the risk they pose to users, and the different risk levels will be subject to correspondingly tailored regulation.

While this first AI legislation will have an impact worldwide, it’s not the end of the story. There is ongoing research in the field of Responsible AI, which explores numerous methods of operationalizing AI ethics. If AI is to be effectively regulated, it must not be considered from the perspective of technology alone. Because AI is embedded in the fabric of our societies, it should be treated as a socio-technical system, one requiring multi-stakeholder involvement and the employment of continuous, value-based methods of assessment. So again, context is of critical importance, and architecture skills in analyzing context can help us assess the impact of changes related to AI.

Below you’ll find a summary of this new legislation, as generated by ChatGPT:

EU’s AI Act Overview:

The EU has taken a monumental step with the AI Act, the world’s pioneering legislation on Artificial Intelligence. Its objective is twofold: spur research and industrial prowess while ensuring AI remains safe, transparent, and centered on human interests.

Draft discussions extended over two years, particularly around new AI applications like ChatGPT. The primary committees ratified the draft on May 11, 2023, and a conclusive vote is scheduled for mid-June 2023.

Main Provisions of the AI Act:

  • Scope: The Act governs various AI applications, from biometric identification to employment systems and social scoring.
  • European AI Board: A supervisory board will monitor the Act’s enforcement throughout the EU.
  • Risk Classification: AI systems are grouped based on their potential risks, from unacceptable to minimal.
    • Unacceptable Risk
    • High Risk
    • Limited Risk
    • Minimal Risk

Responsibility & Liability: Once AI systems are launched, authorities carry out market surveillance. Providers and users must report major issues, ensuring ongoing human oversight and post-market monitoring.

Affected Stakeholders: The Act impacts a broad spectrum, from AI developers, providers, and users to general EU citizens. Its enforcement lies with national EU authorities, while the European AI Board offers guidance.

Closing Note: The AI Act symbolizes the EU’s commitment to balancing AI’s industrial growth with the safeguarding of citizens’ rights and interests.

Selected Examples by Risk Category:

  1. Unacceptable risk

AI systems posing an unacceptable risk are considered a threat to people and will be banned. They include:

  • Cognitive behavioral manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behavior in children.
  • Social scoring: classifying people based on behavior, socio-economic status or personal characteristics.
  • Real-time and remote biometric identification systems, such as facial recognition.

Some exceptions may be allowed: for instance, “post” remote biometric identification systems, in which identification occurs after a significant delay, will be allowed for the prosecution of serious crimes, but only after court approval.

  2. High risk

AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:

a) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices, and lifts.

b) AI systems falling into eight specific areas that will have to be registered in an EU database:

  • Biometric identification and categorization of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum, and border control management
  • Assistance in legal interpretation and application of the law

All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.

Generative AI

Generative AI, like ChatGPT, would have to comply with transparency requirements:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training

  3. Limited risk

Limited-risk AI systems should comply with minimal transparency requirements that allow users to make informed decisions. Users should be made aware when they are interacting with AI; after interacting with an application, they can then decide whether they want to continue using it. This includes AI systems that generate or manipulate image, audio, or video content, for example deepfakes.
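
To make the tiers concrete for an architecture assessment, here is a minimal, hypothetical sketch in Python of how an organization might catalogue its AI use cases against the Act’s four risk categories. The tier names follow the summary above; the example systems, owners, and the obligations helper are illustrative assumptions, not text from the legislation.

from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers described in the AI Act summary above."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # pre-market assessment, EU database registration
    LIMITED = "limited"            # transparency obligations (disclose AI interaction)
    MINIMAL = "minimal"            # no additional obligations


@dataclass
class AIUseCase:
    """One entry in a hypothetical internal AI inventory (illustrative fields)."""
    name: str
    tier: RiskTier
    owner: str


def obligations(use_case: AIUseCase) -> str:
    """Map a use case's tier to the follow-up action an architect might flag."""
    actions = {
        RiskTier.UNACCEPTABLE: "Do not deploy: prohibited practice under the Act.",
        RiskTier.HIGH: "Plan pre-market assessment, registration, and lifecycle monitoring.",
        RiskTier.LIMITED: "Add user-facing disclosure that the interaction or content is AI-generated.",
        RiskTier.MINIMAL: "No extra obligations; follow normal governance.",
    }
    return actions[use_case.tier]


if __name__ == "__main__":
    # Invented example systems and owners, purely for illustration.
    inventory = [
        AIUseCase("CV-screening assistant", RiskTier.HIGH, "HR"),
        AIUseCase("Marketing image generator", RiskTier.LIMITED, "Marketing"),
        AIUseCase("Spam filter", RiskTier.MINIMAL, "IT"),
    ]
    for uc in inventory:
        print(f"{uc.name} [{uc.tier.value}]: {obligations(uc)}")

Such an inventory is, of course, only a starting point; the real work lies in the multi-stakeholder, value-based assessment described earlier.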

The above classifications of the use of AI do not consider the defense domain. However, we can imagine that AI could help streamline operations, enhance decision-making, and increase effectiveness in as yet unexplored ways. For example, AI-powered analytics could provide strategic advantages by predicting and identifying threats. For additional details about the enhanced use of AI in the defense sector, refer to https://www.gao.gov/blog/how-artificial-intelligence-transforming-national-security.

Note that the U.S. still lacks GenAI legislation, seemingly relying only on good-behavior rules in order to stimulate competition. However, not everyone is planning to behave well: as things currently stand, GenAI can be used for criminal activities, given that verified IDs and purposes are not required to leverage such open-source technology.