It's coming up to a year since the European Commission published its proposal for the Artificial Intelligence Act (the AI Act/AI Regulation).
The public consultation received over 300 responses, including from industry stakeholders, NGOs, academics and others, indicating significant interest in the proposed AI Regulation.
Let me provide you with a short overview of the proposed EU AI regulation and current developments during the legislative process.
Please note that the AI Act draft is still under legislative review and the following proposals may change.
An artificial intelligence system is defined as software that is developed with one or more of the following techniques and approaches:
machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; or
statistical approaches, Bayesian estimation, search and optimization methods.
Such a system can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.
The proposed definition is broad and appears to be technology-neutral and future-proof. In addition, the EU will be able to modify and update the definition by adding to or changing the techniques and approaches listed.
One of the latest compromise texts proposes to include an explicit reference indicating that any such system should be capable of determining how to achieve a given set of human-defined objectives by learning, reasoning or modelling.
The AI Act will apply to providers placing AI systems on the market or putting them into service in the EU (regardless of whether they are established in the EU or in a third country), users of AI systems located in the EU, and providers and users located in a third country where the output produced by the AI system is used in the EU.
The AI Act proposes a risk-based approach and divides AI systems into three main categories: unacceptable risk, high risk and limited/minimal risk.
AI systems posing an unacceptable risk will be prohibited. These include:
Subliminal, manipulative, or exploitative systems that cause harm.
Real-time, remote biometric identification systems used in public spaces for law enforcement.
All forms of social scoring, such as AI or technology that evaluates an individual’s trustworthiness based on social behavior or predicted personality traits.
High-risk AI systems are those intended to be used in the following areas:
Biometric identification and categorisation of natural persons – i.e. AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons.
Management and operation of critical infrastructure – i.a. AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity.
Education and vocational training – i.a. AI systems intended to be used for determining access or assigning natural persons to educational and vocational training institutions, or for assessing students and participants in admission tests.
Employment, employee management and access to self-employment – i.a. AI systems intended to be used for recruitment or the selection of natural persons, notably for advertising vacancies, screening or filtering applications and evaluating candidates in the course of interviews or tests.
Access to and enjoyment of essential private services and public services and benefits – i.a. AI systems intended to be used to evaluate the creditworthiness of natural persons or to establish their credit score or to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including firefighters and emergency medical aid.
Law enforcement – i.a. AI systems intended to be used by law enforcement authorities to carry out individual risk assessments of natural persons in order to assess the risk of a natural person offending or reoffending, or the risk for potential victims of criminal offences.
Migration, asylum and border control management - i.a. AI systems intended to be used by competent public authorities to assess risk, including a security risk or a risk of irregular immigration.
Administration of justice and democratic processes - i.a. AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.
The lists of unacceptable-risk and high-risk systems will be updated through regular assessments conducted by competent authorities.
It’s also worth mentioning that, due to a recent legislative proposal, the high-risk list has been updated to include, amongst other things, AI systems intended to be used to control, or as safety components of, digital infrastructure, and AI systems intended to be used to control fuel emissions and pollution.
Certain systems in the limited/minimal-risk category are subject to transparency obligations.
The proposed AI Act focuses mainly on high-risk AI systems, which will not be strictly prohibited, but will be subject to strict compliance obligations, as well as technical and monitoring obligations.
In mid-January 2022, the French Presidency's latest compromise text on the Artificial Intelligence Act (AIA) emerged. It proposed changes to articles on risk management systems, data management, technical documentation, record keeping, transparency and provision of information to users, human oversight, accuracy, robustness and cyber security.
The most important changes:
High-risk systems
The requirements set out in the AI Act with which high-risk systems should comply have been significantly modified. It is proposed that compliance with the AI Act should also take into account the generally acknowledged state of the art, as reflected in relevant harmonised standards or common specifications.
Risk management
In terms of risk management systems, the compromise text clarifies which elements should be taken into consideration when creating such solutions, especially in the context of identifying AI-specific risks. The risks identified within such systems should be limited to those that can be mitigated or eliminated through the process of designing and developing the high-risk system or through the provision of appropriate technical documentation.
Data governance
It has been proposed that, for the development of high-risk AI systems which do not use techniques involving the training of models, the data governance requirements shall apply only to testing data sets. The reason given is that training, validation and testing data sets can never be completely free of errors.
It has also been proposed that AI systems apply data minimization (as laid down in the GDPR), i.e. limit the collection of personal data to what is strictly necessary, throughout the lifecycle of the AI system. This means that the personal data used will need to be limited to what is necessary for the purposes of the processing.
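To illustrate what data minimization can look like in practice, here is a minimal, purely hypothetical Python sketch. The column names, the pandas-based pipeline and the helper function are invented for illustration only and are not prescribed by the AI Act or the GDPR; the idea is simply that a training pipeline drops direct identifiers and keeps only the features strictly needed for the model.

```python
import pandas as pd

# Hypothetical example: the only columns the model strictly needs.
REQUIRED_FEATURES = ["age_band", "transaction_count", "avg_monthly_spend"]

# Direct identifiers that are not needed for training and should not be retained.
DIRECT_IDENTIFIERS = ["full_name", "email", "phone_number", "national_id"]


def minimize(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only the features strictly necessary for the stated purpose of processing."""
    # Drop any direct identifiers that happen to be present in the raw data.
    df = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
    # Retain only the columns the model actually needs.
    return df[REQUIRED_FEATURES]


if __name__ == "__main__":
    raw = pd.DataFrame(
        {
            "full_name": ["Jane Doe"],
            "email": ["jane@example.com"],
            "age_band": ["30-39"],
            "transaction_count": [42],
            "avg_monthly_spend": [118.5],
        }
    )
    # The minimized frame no longer contains the name or e-mail address.
    print(minimize(raw))
```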
Technical documentation
The proposal includes changes intended to provide more flexibility for SMEs and start-ups with regard to compliance with the technical documentation requirements.
Transparency
High-risk systems should be designed in a way that not only ensures compliance with the requirements of the regulation, but also allows users to understand how such an artificial intelligence system works. It is proposed to clarify the scope of information that should be included in the manual accompanying the system itself.
In order for it to become legally binding, the AI Act must go through the EU’s ordinary legislative procedure, which requires the consideration and approval of the proposed Regulation by the Council and the European Parliament. Once adopted, the AI Act will come into force twenty days after it is published in the Official Journal. However, the draft proposes a period of 24 months before the law will apply.
The AI Act will be directly applicable in all EU countries and will not require implementation into local laws of member states.
In the spirit of the draft AI Act, on 6 October 2021 the European Parliament adopted a non-binding resolution concerning the use of artificial intelligence by the police and judicial authorities in criminal matters. In that resolution, the Parliament called for, amongst other things, a ban on the use of facial recognition technology for law enforcement purposes that leads to mass surveillance in publicly accessible spaces.