
What is the EU AI Act: A Beginner’s Guide to AI Regulation

  • September 16 2024


If you are still unaware of the immense buzz around AI, you must be consciously avoiding the news. AI, or Artificial Intelligence, is a transformative technology that enables machines and computers to simulate human learning, comprehension, and action. AI systems can perform a variety of tasks that have traditionally been handled by humans, and with new advances in the field every day, they can match and even surpass what the human mind achieves, often in a fraction of the time.

Today, we have self-driving cars, smart homes filled with smart gadgets, smart classrooms, social media recommendation algorithms, healthcare apps and services, and more. With AI tools for nearly every sector and type of work, Artificial Intelligence has quickly become a big part of our daily lives.

The flip side is that privacy in AI systems is a challenge of utmost importance in this fast-proliferating field. AI systems process vast amounts of personal information, and in the absence of strong data privacy safeguards they can enable unauthorized surveillance, loss of personal information, identity theft, and algorithmic bias and discrimination. Ensuring privacy in AI systems is therefore a burning topic today: it is ethically necessary to safeguard individuals' privacy and make these systems safe to use.

The EU AI Act is one of the first major legislative efforts to regulate AI technology so that it is safe and trustworthy for users.

What is the EU AI Act?

The world’s first comprehensive AI law, the EU AI Act aims to ensure better conditions for the development and use of this technology.

Why was the EU AI Act created?

As part of its digital strategy, the European Union proposed the first EU regulatory framework for AI in April 2021. It states that AI systems used in various applications must be analyzed and classified according to the risks they pose to users. A higher risk level requires stricter regulation, and vice versa.

The Risk-based Approach of the EU AI Act

The EU AI Act presents a proportionate Risk-based Approach to AI regulation. This indicates a gradual scheme of requirements and obligations depending on the level of risk posed to health, safety and fundamental rights.
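The Act's gradual scheme of obligations can be sketched in code. The four risk tiers below (unacceptable, high, limited, minimal) are the tiers the Act actually defines; the mapping of tiers to obligations is an illustrative summary of the requirements discussed in this article, not a legal checklist.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers defined by the EU AI Act, from most to least regulated."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict obligations before market entry
    LIMITED = "limited"            # lighter transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping of tiers to obligations (a summary, not legal advice).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "risk management system",
        "data management and governance",
        "technical documentation",
        "record keeping",
        "transparency and provision of information to users",
        "human oversight",
        "accuracy, robustness and cybersecurity",
    ],
    RiskTier.LIMITED: ["transparency (e.g. disclosing interaction with an AI)"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]
```

The higher the tier, the longer the obligation list: high-risk systems carry all seven requirements covered in the next section, while minimal-risk systems carry essentially none.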

Key Requirements for High-Risk AI Systems

There are certain mandatory requirements that High-Risk AI Systems must comply with before such a system can be placed on the EU market or before its output may be used in the EU. Conformity assessment is intended to certify that the system in question meets the following key requirements:

1. Risk Management Systems

These must be established, implemented, documented, maintained, and regularly updated. The risk management system plays a key role in identifying and analyzing foreseeable risks associated with the AI system, then works to eliminate or reduce those risks to the extent possible. Where risks cannot be eliminated, it implements appropriate control measures in relation to them.


2. Data Management, Data Governance, and Accuracy

High-risk AI systems that involve training models with data must use training, validation, and testing data sets subject to appropriate data governance and management practices. These data sets must be relevant, representative, and free of errors, and must take into account the characteristics or elements particular to the specific geographical, behavioural, or functional setting within which the AI system is intended to be used.
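What such data governance checks might look like in practice can be sketched as follows. This is a minimal, hypothetical example: a completeness check that flags missing values, and a crude distribution report as a proxy for the "representative" requirement. The field names are illustrative.

```python
import math
from collections import Counter

def missing_value_report(records: list[dict]) -> list[str]:
    """Flag rows with missing fields -- a naive completeness check."""
    issues = []
    for i, row in enumerate(records):
        for field, value in row.items():
            if value is None or (isinstance(value, float) and math.isnan(value)):
                issues.append(f"row {i}: missing '{field}'")
    return issues

def representation_report(records: list[dict], field: str) -> dict:
    """Share of each category of `field` in the data set -- a crude proxy
    for checking that the data is representative of its intended setting."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}
```

A real conformity assessment would go much further (bias audits, provenance tracking, documentation of collection processes), but the principle is the same: data quality must be measured and evidenced, not assumed.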

3. Technical Documentation

This provides a detailed description of the elements of the AI system and the process of its development. It must be drawn up before the AI system is placed on the market or put into service and must be kept regularly updated.

4. Record Keeping

High-risk AI systems must have logging capabilities that enable traceability of the AI system’s functioning throughout its lifecycle. This should be at a level that is appropriate to its intended purpose.
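One common way to implement such traceability is timestamped, structured event logging. The sketch below is a hypothetical illustration, assuming a JSON-lines audit log; the Act mandates logging capabilities, not any particular format or field names.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger: one JSON record per system event, so the
# system's functioning can be traced over its lifecycle.
logger = logging.getLogger("ai_audit")

def log_event(event_type: str, payload: dict) -> str:
    """Emit one structured, UTC-timestamped audit record and return it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        **payload,
    }
    line = json.dumps(record, sort_keys=True)
    logger.info(line)
    return line
```

Because each record carries a timestamp and an event type, logs like these can later be filtered and replayed to reconstruct what the system did and when, which is the point of the record-keeping requirement.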

5. Transparency and Provision of Information to Users

The operation of High-Risk AI systems must be sufficiently transparent to enable users to interpret the system’s output and use it appropriately. Each system must be accompanied by instructions for use that clearly state any known and foreseeable circumstances that may lead to risks to health, safety, or fundamental rights, the human oversight measures in place, and the expected lifetime of the system. The information provided must be concise, complete, correct, and clear, as well as relevant, accessible, and easily understood by users.

6. Human Oversight

High-risk AI systems must be capable of being overseen by natural persons, in order to prevent or minimize risks to health, safety, or fundamental rights. The provider is expected to identify and, wherever possible, build oversight measures into the AI system. The designated individuals should be skilled enough to fully understand the capacities and limitations of the AI system and be able to monitor its operation and output for signs of anomalies, dysfunctions, and unexpected performance. If required, they should be able to intervene and stop the system.

7. Accuracy, Robustness and Cybersecurity

High-risk AI systems must, in light of their intended purpose, achieve an appropriate level of accuracy, and the relevant accuracy metrics must be declared in the accompanying instructions for use. The systems must also be appropriately robust: resilient to errors, faults, or inconsistencies, and resistant to attempts by third parties to exploit system vulnerabilities, including data poisoning and adversarial examples.
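"Declaring an accuracy metric" is less abstract than it sounds: it means publishing a defined, reproducible measurement. A minimal sketch of the simplest such metric (plain classification accuracy on a labelled test set; the choice of metric is the provider's, not mandated by the Act):

```python
def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Fraction of correct predictions on a labelled test set -- one
    example of an accuracy metric a provider might declare in the
    instructions for use."""
    assert len(predictions) == len(labels), "one label per prediction"
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)
```

In practice a provider would declare several metrics appropriate to the system's purpose (precision, recall, error rates per subgroup, and so on), measured on a held-out data set governed by the data requirements above.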

Which AI Practices are Banned by the EU AI Act?

The EU AI Act bans certain AI practices that threaten citizens’ rights. These include:

  1. Biometric categorisation systems based on sensitive characteristics
     
  2. Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
     
  3. Emotion recognition in the workplace and schools
     
  4. Social scoring
     
  5. Predictive policing (when it is based solely on profiling a person or assessing their characteristics) and
     
  6. AI that manipulates human behaviour or exploits people’s vulnerabilities
     

How the EU AI Act Will Affect Businesses and AI Developers

As a new regulation, the EU AI Act aims to establish harmonized rules ensuring that AI systems in the EU respect fundamental rights and provide a high level of protection for users' health and safety, while at the same time fostering and promoting investment and innovation in AI. It affects businesses and AI developers in the following ways:

  1. An Impact on Innovation: The Act aims to strike the right balance between innovation and the responsible application of AI systems.
     
  2. Regulatory Sandboxes: A regulatory sandbox is a controlled environment where business organizations can test new services, products, or business models under the supervision of a regulator. The purpose of a regulatory sandbox is to allow responsible innovation without over-regulating, keeping the protection of consumers in mind.
     
  3. Compliance Costs: The European Commission’s Impact Assessment of the AI Act estimates compliance costs for providers deploying AI systems. These costs vary with the risk level of the AI system, with additional conformity assessment costs on top.

What are the Penalties for Non-compliance?

There are legal consequences for businesses or organizations that fail to comply with the EU AI Act, such as:

  1. Fines of up to €35 million or 7% of worldwide annual turnover (whichever is higher) for the most serious violations, such as the use of prohibited AI practices, with lower fine tiers for other breaches.
     
  2. Comparison to GDPR: The EU's GDPR (General Data Protection Regulation) has had a significant impact on how companies handle personal data. One of the main differences between the AI Act and GDPR is the scope of application. The AI Act applies to providers, users, and other participants across the AI value chain (such as importers and distributors) of AI systems placed on or used in the EU market, regardless of their location.
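The "whichever is higher" rule for fines is simple enough to express as a one-liner. The cap and percentage are left as parameters here, since the figures differ by violation tier; the turnover figures in the usage notes are hypothetical.

```python
def max_fine(annual_turnover_eur: float, cap_eur: float, pct: float) -> float:
    """The Act's 'whichever is higher' rule: the greater of a fixed cap
    or a percentage of worldwide annual turnover."""
    return max(cap_eur, pct * annual_turnover_eur)
```

For a small company, the fixed cap dominates; for a large one, the turnover percentage does. For example, at a cap of €30 million and 6%, a hypothetical €2 billion turnover yields a maximum fine of €120 million, while a €100 million turnover yields the €30 million cap.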

The Global Impact of the EU AI Act

  1. Setting Global Standards: The EU AI Act is expected to influence AI regulations globally, just as the GDPR did for data privacy.
     
  2. International Companies: Businesses outside the EU will also need to comply with the Act if they operate in Europe. In this sense, the EU AI Act potentially influences AI development around the world.

Conclusion

With the aim of implementing an effective set of rules for AI systems to safeguard fundamental rights including the privacy of users, the EU AI Act establishes a risk-based approach to regulation. The EU AI Act is an ambitious initiative by the European Union to create a harmonized legal framework for the development and application of artificial intelligence (AI).

It categorises AI systems according to the intensity and scope of the risks each system generates. This landmark law aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI, while also boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

Through the implementation of this law, the European Union underscores its role in leading the world towards trustworthy and ethical AI development, balancing the need for innovation with public safety. The Act also aims to strengthen the EU’s competitiveness in this strategic sector, foster a safe and trustworthy society, counter disinformation, and ensure that humans remain ultimately in control as AI advances. It further encompasses the use of AI and digital tools to improve citizens’ access to information, including for persons with disabilities.

In their own best interests, users should stay informed about AI regulations and their potential impact on technology and society.
