The European Union’s proposed artificial intelligence (AI) policy, unveiled on April 21, is a direct challenge to Silicon Valley’s widely held belief that the law should stay out of burgeoning technologies. The plan lays out a complex regulatory framework that prohibits certain AI applications, strictly regulates high-risk applications, and regulates less dangerous AI systems lightly.
WHAT IS AN “AI SYSTEM”?
Article 3(1) of the AI Regulation defines an artificial intelligence (AI) system as software that is developed with one or more of the techniques and approaches listed in Annex I of the AI Regulation and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.
NEED TO REGULATE AI
The widespread use of AI in our daily activities, often in unnoticed ways, has given rise to unprecedented legal concerns in novel concepts and circumstances. The need to safeguard society at large from technologies that may adversely affect it is the principal reason for regulation.
The European Commission has listed several AI practices that it considers contrary to EU values and fundamental rights and that are therefore expressly prohibited. These include the use of real-time remote biometric identification systems in locations open to the general public for law enforcement purposes, unless required for a targeted criminal search or the prevention of serious dangers. In 2020, for example, the European Parliament and the Commission raised concerns about a facial recognition programme created by Clearview AI to assist US law enforcement officials in locating photographs of unidentified individuals posted online. Such technology would now be classified as posing an unacceptable risk and would accordingly be banned.
HIGH-RISK AI SYSTEMS
Articles 6 and 7 of the AI Regulation provide the criteria for deciding whether a system should be classified as high-risk. The first category consists of AI systems used as safety components of products (or which are themselves products) already covered by the EU product safety legislation listed in Annex II of the AI Regulation. The second category consists of stand-alone AI systems that may impair the fundamental rights of natural persons and that are used in the areas listed in Annex III, which include employment, education, law enforcement, migration, asylum and border control, the administration of justice, and democratic processes.
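As a rough illustration, this two-pronged classification test can be sketched in code (a simplification for exposition only; the area labels and the function itself are assumptions for illustration, not the Regulation’s own wording or an exhaustive enumeration of Annex III):

```python
from typing import Optional

# Illustrative subset of the Annex III areas named above (assumption:
# simplified labels, not the Regulation's exact enumeration).
ANNEX_III_AREAS = {
    "employment", "education", "law_enforcement",
    "migration_asylum_border_control", "justice", "democratic_processes",
}

def is_high_risk(safety_component_of_annex_ii_product: bool,
                 deployment_area: Optional[str] = None) -> bool:
    """Schematic Articles 6-7 test: high-risk if the system is a safety
    component of a product covered by Annex II product-safety legislation,
    or a stand-alone system used in an area listed in Annex III."""
    if safety_component_of_annex_ii_product:
        return True
    return deployment_area in ANNEX_III_AREAS

print(is_high_risk(True))                  # True  (Annex II product)
print(is_high_risk(False, "employment"))   # True  (Annex III area)
print(is_high_risk(False, "video_games"))  # False (neither category)
```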
In general, a high-risk AI system must meet the following requirements.
- Transparency: High-risk AI systems must be designed and developed in such a manner that their output is sufficiently transparent for users to understand it and apply it appropriately.
- Human oversight: To limit threats to human health, safety, and fundamental rights, high-risk AI systems must be designed and developed with human oversight in mind.
- Risk management system: A risk management system must be created and maintained throughout the system’s lifecycle in order to determine and evaluate hazards and execute suitable risk management strategies.
- Data governance: Data sets used for training, validation, and testing must be relevant, representative, free of errors, and complete, and must be subject to appropriate data governance and management practices.
- Technical documentation: Before an AI system may be placed on the market, complete technical documentation demonstrating conformity with the AI Regulation must be drawn up, and it must be kept up to date throughout the system’s lifetime.
- Accuracy, robustness, and cybersecurity: A high level of accuracy, robustness, and cybersecurity must be maintained throughout the lifecycle of the high-risk AI system.
Furthermore, the AI Regulation requires the provider of a high-risk AI system to ensure that the system meets the requirements for high-risk AI systems and undergoes the relevant conformity assessment procedure before being placed on the market or put into service. Providers must take immediate corrective action to resolve any suspected non-conformity and inform the appropriate authorities. They must establish a quality management system covering regulatory compliance as well as design, testing, validation, data management, and record-keeping processes, and they must register high-risk AI systems in the EU database before placing them on the market. Finally, they must establish and maintain a post-market monitoring system that gathers and analyses data on the performance of high-risk AI systems throughout their lifetime. This includes the obligation to report any serious incident or malfunction of the AI system that would constitute a breach of EU rules intended to safeguard fundamental rights.
Users of high-risk AI systems are subject to more limited but still significant duties under the AI Regulation. They must:
- use the systems in accordance with the provider’s instructions and put in place all technical and organisational safeguards specified by the provider to mitigate the risks of using a high-risk AI system;
- ensure that all input data is relevant to the system’s intended purpose;
- monitor system performance and alert the provider to serious incidents or malfunctions; and
- preserve the logs automatically generated by the high-risk AI system, to the extent such logs are within their control.
ALL OTHER AI SYSTEMS
Other AI systems that are not classified as prohibited or high-risk are not subject to any restrictions. The Commission has indicated that providers of such “non-high-risk” AI systems should be encouraged to adopt codes of conduct that stimulate the voluntary application of the rules applicable to high-risk AI systems, in order to support the development of “trustworthy AI”. Transparency obligations are nonetheless imposed on certain AI systems that pose a limited risk. For example, unless it is “clear from the conditions and context of usage,” AI systems intended to interact with natural persons must be designed and developed in such a way that users are informed they are dealing with an AI system. This transparency requirement would apply, for instance, to the use of chatbots. All other “low-risk” AI systems may be developed and deployed under existing law, with no new legal requirements.
LIABILITY ON NON-COMPLIANCE WITH THE REGULATIONS
Non-compliance with certain sensitive provisions (e.g., the prohibited AI practices, or the data-quality requirements for high-risk AI systems) may, under Article 71(1) of the AI Regulation, result in fines of up to EUR 30 million or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher. Failure to comply with any other AI-related requirement may, under Article 71(4), result in fines of up to EUR 20 million or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
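The two-tier “fixed amount or percentage of turnover, whichever is higher” cap can be made concrete with a short sketch (illustrative only; the function name and the `severe` flag are my assumptions, not terms used in the Regulation):

```python
def max_fine_eur(annual_turnover_eur: float, severe: bool) -> float:
    """Upper bound of the fine under Article 71 of the proposed AI Regulation.

    severe=True  -> Article 71(1) breaches (e.g. prohibited AI practices,
                    data-quality duties for high-risk systems): EUR 30m or 6%.
    severe=False -> other breaches, Article 71(4): EUR 20m or 4%.
    Whichever amount is higher applies.
    """
    fixed_cap, pct = (30_000_000, 0.06) if severe else (20_000_000, 0.04)
    return max(fixed_cap, pct * annual_turnover_eur)

# A company with EUR 1bn worldwide annual turnover:
# severe-breach cap = max(EUR 30m, 6% of 1bn = EUR 60m) = EUR 60m
print(max_fine_eur(1_000_000_000, severe=True))   # 60000000.0
# For a small company, the fixed EUR 30m floor dominates:
print(max_fine_eur(100_000_000, severe=True))     # 30000000
```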
ESTABLISHMENT OF EUROPEAN ARTIFICIAL INTELLIGENCE BOARD
The proposed rules would be implemented at the Union level through the establishment of a European Artificial Intelligence Board (the “Board”), composed of representatives of the Member States and the Commission. The Board will (i) help the Commission and the national supervisory authorities cooperate more effectively, and (ii) provide advice and expertise to the Commission. It will also gather and disseminate best practices among Member States.
NEED OF GLOBAL LEGAL FRAMEWORK ON ARTIFICIAL INTELLIGENCE
AI is becoming one of the most important concerns in the world, and its influence is growing across the globe. Expectations and concerns have risen at practically every level of human activity. AI raises several issues that are either historically novel or novel in the intensity with which they arise: the interaction between humans and AI and its legal, ethical, and sociocultural repercussions; the scale of the resulting productivity gains and capital accumulation, and their effect on the working population as a factor in the financial circuit; and, probably above all, the possible emergence of entirely autonomous AI entities.
The fusion of AI with cyberspace could produce entities with intelligent, personhood-like capabilities but no legal link to a physical location, and consequently to any state. Such entities would thus fall outside the reach of state legal power, which is why a separate legal regime is required to govern their future legal personhood. A worldwide framework administered by the international community is therefore required to regulate AI technology in global commons such as the high seas, outer space, Antarctica, and cyberspace.
CONCLUSION
Humans are entering an era in which we will increasingly rely on autonomous, learning machines to complete a wide range of tasks. At some point, the legal system will have to determine what to do when such machines cause harm. The EU’s AI Regulation is undoubtedly a thorough and bold step by the Commission to lead the way in one of the most rapidly evolving fields of technology since the inception of the Internet. The EU has paved the road for developed and developing countries alike to step forward and pass regulations limiting both present and future risks posed by AI. Governments throughout the world should adopt AI legislation for regulatory purposes. This would not only support the industry’s safe and orderly development, but also allow technology and enterprises to flourish at their own pace.
Author(s) Name: Aayush Pandey
He is a 2nd-year Law student at Gujarat National Law University with a keen interest in International Law.