If you are still unaware of the immense buzz about AI in today’s world, you must be consciously staying away from the news. AI, or Artificial Intelligence, is the transformative technology that enables machines and computers to simulate human learning, comprehension, and action. AI can perform a variety of tasks that have traditionally been handled by humans, and with new advancements in the field every day, it allows machines to match and even improve upon what the human mind can achieve, in a fraction of the time.
Today, we have self-driving cars, smart homes and gadgets, smart classrooms, social media recommendation algorithms, healthcare apps and services, and more. With AI tools for nearly every sector and type of work, Artificial Intelligence has quickly become a big part of our daily lives.
The flip side is that privacy in AI systems is a challenge of utmost importance in this fast-proliferating field. AI systems process vast amounts of personal information and data, and in the absence of data privacy safeguards they can enable unauthorized surveillance, loss of personal information, identity theft, algorithmic bias and discrimination. Ensuring privacy in AI systems is a burning topic today because it is ethically right to safeguard individuals’ privacy and ensure that these systems are safe to use.
The EU AI Act is one of the first major legislative efforts to regulate AI technology so that it is safe and trustworthy for users.
The world’s first comprehensive AI law, the EU AI Act was introduced to ensure better conditions for the development and use of this technology.
As part of its digital strategy, the European Union proposed the first EU regulatory framework for AI in April 2021. It states that AI systems used in various applications need to be analyzed and classified according to the risks they pose to users: a higher risk level requires stricter regulation, and vice versa.
The EU AI Act presents a proportionate Risk-based Approach to AI regulation. This indicates a gradual scheme of requirements and obligations depending on the level of risk posed to health, safety and fundamental rights.
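As a rough illustration of this tiered scheme, the four risk levels defined by the Act (unacceptable, high, limited, minimal) can be sketched in code. The example use cases and the `obligations` helper below are hypothetical simplifications; the real classification follows the Act’s annexes, not a lookup table.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict conformity requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated


# Illustrative mapping of example use cases to tiers (hypothetical;
# the Act itself classifies systems through its annexes).
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def obligations(tier: RiskTier) -> str:
    """Summarise, in one phrase, what each tier requires."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "conformity assessment before market entry",
        RiskTier.LIMITED: "transparency duties (disclose AI use)",
        RiskTier.MINIMAL: "no specific obligations",
    }[tier]
```

The point of the gradual scheme is visible here: moving one tier up swaps a light obligation for a heavier one, rather than applying the same rules to every system.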
There are certain mandatory requirements that High-Risk AI Systems must comply with before they can be placed on the EU market or their output used in the EU. Conformity assessment is intended to certify that the system in question meets the following key requirements:
A risk management system must be established, implemented, documented, maintained, and regularly updated. It plays a key role in identifying and analyzing the foreseeable risks associated with the AI system, eliminating or reducing those risks to the extent possible, and implementing control measures for the risks that remain.
High-risk AI systems that involve training models with data must use training, validation and testing data sets that are subject to appropriate data governance and management practices. The data sets must be relevant, representative, and as free of errors as possible, and must take into account the characteristics or elements that are particular to the specific geographical, behavioural, or functional setting within which the AI system is intended to be used.
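A minimal sketch of what such data-governance checks might look like in practice, assuming a hypothetical `check_dataset` helper (not from the Act or any specific library): it flags rows with missing required fields and groups that are underrepresented relative to a chosen threshold.

```python
from collections import Counter


def check_dataset(rows, required_fields, group_field, min_group_share=0.05):
    """Run two basic governance checks on a list-of-dicts data set:
    completeness (no missing required fields) and a crude
    representativeness check on one contextual/demographic field."""
    issues = []

    # Completeness: every row must carry every required field.
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) is None]
        if missing:
            issues.append(f"row {i}: missing {missing}")

    # Representativeness: no group should fall below the threshold share.
    counts = Counter(r[group_field] for r in rows if r.get(group_field))
    total = sum(counts.values())
    for group, n in counts.items():
        if total and n / total < min_group_share:
            issues.append(f"group '{group}' underrepresented ({n}/{total})")

    return issues
```

Real conformity assessment goes far beyond this, but the sketch shows the shape of the requirement: governance is a property you test for, not a statement you assert.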
Technical documentation must contain a detailed description of the elements of the AI system and the process of its development. It must be drawn up before the AI system is placed on the market or put into service and must be regularly updated.
High-risk AI systems must have logging capabilities that enable traceability of the AI system’s functioning throughout its lifecycle. This should be at a level that is appropriate to its intended purpose.
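To make the traceability requirement concrete, here is a hypothetical logging sketch (the function name and record fields are illustrative, not prescribed by the Act): each inference is recorded with an event ID, timestamp, and model version so the system’s behaviour can be reconstructed later.

```python
import json
import logging
import time
import uuid

logger = logging.getLogger("ai_audit")


def log_inference(model_version: str, input_summary: dict, output: dict) -> dict:
    """Write one structured audit record per prediction.

    Only a summary of the input is logged, to avoid storing raw
    personal data in the audit trail.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "input_summary": input_summary,
        "output": output,
    }
    logger.info(json.dumps(record))
    return record
```

Keeping the records structured (here, JSON) is what makes lifecycle-wide traceability practical: logs can be filtered by model version or time window when an anomaly needs to be investigated.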
For the benefit of users, the operation of High-Risk AI systems must be sufficiently transparent to enable users to interpret the AI system’s output and use it appropriately. Additionally, it should be accompanied by instructions for use and clearly mention any known and foreseeable circumstances that may lead to risks to health and safety or fundamental rights, human oversight measures, and the expected lifetime of the high-risk AI system. The information provided must be concise, complete, correct and clear. It must be relevant, accessible and easily understood by users.
High-risk AI systems must be capable of being overseen by natural persons in order to prevent or minimize risks to health, safety or fundamental rights. The provider is expected to identify and, wherever possible, build oversight measures into the AI system. The designated individual should be skilled enough to fully understand the capacities and limitations of the AI system and be able to monitor its operation and output for signs of anomalies, dysfunctions and unexpected performance. If required, humans should be able to intervene and stop the system.
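One way such oversight measures might be built in is sketched below, under stated assumptions: a hypothetical `OverseenModel` wrapper that routes anomalous outputs to human review and gives the operator a stop switch. None of this comes from the Act itself; it only illustrates the intervene-and-stop idea.

```python
class OverseenModel:
    """Wrap a model with two oversight hooks: an anomaly check that
    flags outputs for human review, and a stop switch a human can pull."""

    def __init__(self, model, anomaly_check):
        self.model = model                  # callable: input -> output
        self.anomaly_check = anomaly_check  # callable: output -> bool
        self.halted = False

    def stop(self):
        """Human operator intervenes and stops the system."""
        self.halted = True

    def predict(self, x):
        if self.halted:
            raise RuntimeError("system stopped by human overseer")
        out = self.model(x)
        # Anomalous outputs are not suppressed, but flagged for review.
        return {"output": out, "needs_human_review": self.anomaly_check(out)}
```

The design choice worth noting is that the flag and the stop switch are separate: monitoring should not silently alter outputs, and stopping should be unconditional.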
High-risk AI systems must, in light of their intended purpose, be appropriately accurate, and the accuracy metrics must be declared in the accompanying instructions of use. The systems must also be appropriately robust and resilient to errors, faults or inconsistencies and resilient to third parties intending to exploit system vulnerabilities, including data poisoning and adversarial examples.
The EU AI Act bans certain AI practices that threaten citizens’ rights. These include:

- Social scoring systems that classify people based on their behaviour or personal characteristics
- Cognitive behavioural manipulation of people or of specific vulnerable groups
- Biometric categorisation based on sensitive characteristics such as political, religious or philosophical beliefs, sexual orientation, or race
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
- Emotion recognition in the workplace and in educational institutions
- Predictive policing based solely on profiling a person
- Real-time remote biometric identification in publicly accessible spaces, except in narrowly defined law-enforcement situations
As a new regulation, the EU AI Act aims to establish harmonized rules ensuring that AI systems in the EU respect fundamental rights and provide a high level of protection of health and safety to users, while at the same time fostering investment and innovation in the field of AI. It affects businesses and AI developers by introducing new compliance obligations and significant penalties for non-compliance.
There are legal consequences for businesses or organizations that fail to comply with the EU AI Act, such as:

- Fines of up to €35 million or 7% of global annual turnover, whichever is higher, for engaging in prohibited AI practices
- Fines of up to €15 million or 3% of global annual turnover for violating other obligations under the Act
- Fines of up to €7.5 million or 1% of global annual turnover for supplying incorrect, incomplete or misleading information to authorities
With the aim of implementing an effective set of rules to safeguard fundamental rights, including the privacy of users, the EU AI Act establishes a risk-based approach to regulation: it categorises AI systems based on the intensity and scope of the risks each system generates. The Act is an ambitious initiative by the European Union to create a harmonized legal framework for the development and application of artificial intelligence.
This landmark law aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while at the same time boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.
The European Union, through the implementation of this law, underscores its role in leading the world towards trustworthy and ethical AI development, balancing the need for innovation with public safety. Its implementation also enhances the EU’s competitiveness in this strategic sector, helping to create a safe and trustworthy society, counter disinformation and ensure that humans remain ultimately in control despite advancements in AI. It also encompasses the use of AI and digital tools to improve citizens’ access to information, including for persons with disabilities.
In their own best interest, users should stay adequately informed about AI regulations and their potential impact on technology and society.
Copyright 2022 SecApps Learning. All Rights Reserved