Towards trusted Artificial Intelligence: the European Union at the forefront with the AI Act

The challenges of AI regulation

As we stand on the cusp of a perhaps unprecedented technological revolution led by artificial intelligence, the European Union is positioning itself as a pioneer with the AI Act, a move towards trusted Artificial Intelligence. This legislation underlines the EU’s commitment to setting a standard for the responsible and ethical use of AI on its territory. The issue of AI regulation goes beyond Europe’s borders: nations worldwide recognise the need to regulate the use of these powerful technologies, and the EU’s AI Act may become a model for many countries seeking to balance innovation and ethics. The EU is not just proposing a model; it is showing that one is possible, and other nations and continents may well follow suit. For example, Australia’s planned AI legislation (currently under development) aims to address similar issues. The regulation has a clear ambition: to reconcile the economic growth promised by AI with unwavering respect for fundamental rights. The issues surrounding the AI Act are vast, covering aspects such as digital sovereignty, trust in AI, and the need to sustain innovation and digital skills.

Let’s take the example of a European company developing an intelligent virtual assistant. To ensure digital sovereignty, this assistant would be powered by data hosted on European servers, so that citizens’ information remains under EU jurisdiction and protected by its strict privacy laws. For trusted artificial intelligence, imagine that this assistant uses transparent and verifiable algorithms that clearly explain the decisions made, building users’ trust in the technology they use daily. Finally, to support innovation and digital skills, initiatives such as partnerships between universities and businesses could be encouraged to develop curricula that prepare the next generation to create and manage responsible, regulation-compliant AI. This balance between growth, ethics, and education is the cornerstone of Europe’s approach to AI regulation.

The EU AI Act: A first response to the challenges of AI

Inherent risks related to AI usage

AI presents a set of inherent risks that are essential to identify and regulate. Consider the following summary as an introduction to a broader discussion on the multifaceted challenges presented by artificial intelligence: 

  • Discrimination and Bias: AI systems can perpetuate and amplify biases in their training data, leading to discrimination in areas such as employment, education, and criminal justice. For instance, how would you feel if you knew an AI system used for hiring at your dream job favoured candidates from certain backgrounds, potentially overlooking your qualifications due to inherent biases in its programming? 
  • Violation of Privacy and Fundamental Rights: AI technologies, particularly those involving surveillance and data analysis, can intrude on individual privacy and breach fundamental rights. This includes unauthorised collection of personal data and the use of facial recognition without consent. Imagine discovering that an AI system has collected and analysed your personal data without your consent, perhaps even through facial recognition technology in public spaces. How would this invasion of privacy affect your trust in technology and governance? 
  • Manipulation and Disinformation: AI can generate realistic fake content, such as deepfakes or synthetic voices, which can manipulate public opinion, spread misinformation, and impersonate individuals, undermining trust in media and institutions. What would be your reaction if you found out that the news story you just shared was created by AI, using deepfake technology to spread misinformation? How would this impact your trust in the media and the information you receive daily? 
  • Loss of Autonomy: The pervasive use of AI in decision-making can lead to a decline in human autonomy and free will as individuals increasingly rely on AI for choices and recommendations, potentially leading to a dependency on technology. How would you feel about an AI system making significant life decisions for you, such as your career path, educational experiences, or personal relationships, potentially reducing your own ability to make free choices? 
  • Increase in Cyber Attacks: AI enhances the capability of cyber-attacks through methods like phishing, deepfakes, and automated mass emails. Additionally, AI-specific threats like data poisoning can corrupt the integrity of AI systems, leading to unreliable or compromised AI decisions. How secure would you feel knowing AI can enhance the effectiveness of cyber-attacks, like phishing or data breaches, that could directly impact your personal and financial information? 

A risk-based approach

In this context, the text of the AI Act presents an ambitious EU regulatory response to these challenges. It proposes a balanced framework, no doubt open to improvement, that encourages respect for digital sovereignty while promoting trusted artificial intelligence. This trust is essential to guarantee fundamental rights, and the Act pursues it through a risk-based approach to the potential abuses of AI. The AI Act aims to hold providers and users of AI systems accountable, prevent the risks inherent in mass use, and safeguard transparency and the security of citizens. It applies to providers and users in the EU, as well as to non-EU actors whose AI systems interact with European users.

The EU has agreed on a two-tiered approach, with transparency requirements for all general-purpose AI models and stricter standards for powerful models with systemic impact on the EU single market.

Thus, the Commission proposed the first-ever legal framework on AI, categorising the risks associated with AI systems into four levels – unacceptable, high, limited, and minimal – and adapting regulatory obligations accordingly, prohibiting or regulating each use based on the risk assessed.
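
To make this tiered logic tangible, here is a minimal Python sketch of the four-level taxonomy. The tiers come from the Act itself, but the example use cases and the summarised obligations are illustrative simplifications, not legal classifications.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # permitted, subject to strict obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping only: classifying a real system requires legal
# analysis of the Act's prohibited-practices and high-risk lists.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Very rough summary of what each tier entails."""
    return {
        RiskTier.UNACCEPTABLE: ["may not be placed on the EU market"],
        RiskTier.HIGH: ["risk management", "data governance",
                        "event logging", "conformity assessment (CE marking)"],
        RiskTier.LIMITED: ["inform users they are interacting with an AI system"],
        RiskTier.MINIMAL: ["no mandatory obligations (voluntary codes of conduct)"],
    }[tier]

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.value} -> {obligations_for(tier)}")
```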

High-risk use cases – towards a trusted artificial intelligence

For example, the EU has identified various high-risk use cases, such as some uses of AI in law enforcement, labour, and education, where it sees a particular risk to fundamental rights. 

A set of obligations for trusted artificial intelligence

The penalties for non-compliance under the AI Act are significant, exceeding those of the General Data Protection Regulation (GDPR): “non-compliance with the rules can lead to fines ranging from 35 million euro or 7% of global turnover to 7.5 million or 1.5% of turnover, depending on the infringement and size of the company”, as the European Parliament’s announcement of the deal on comprehensive rules for trustworthy AI puts it.

The AI Act represents a significant legislative framework to build trusted artificial intelligence in the European Union. It includes a set of obligations spanning the entire lifecycle of an AI system, from its design through its deployment to its post-market phase. From the earliest stages of creation and development, it is imperative to establish rigorous data and risk governance, ensure automatic event logging, and measure the environmental impact of the technologies deployed. Before being placed on the market, systems must be protected by organisational and technical measures against cyber risks, demonstrate compliance through the CE marking and an EU declaration of conformity, and, where personal data is involved, comply with the GDPR. Finally, once an AI system is up and running, continuous monitoring and a rapid incident-response capability are required, with mandatory notification of serious incidents within the strict deadlines set by the Act and effective resolution of malfunctions, to maintain trust in these advanced systems. This ongoing commitment to responsible AI ensures that technological innovation moves forward in line with European values and ethical standards.
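
As an illustration of the automatic event logging required of high-risk systems, here is a minimal sketch of an append-only audit trail wrapped around a prediction call. The model identifiers, fields, and log format are assumptions for the example; a real implementation would follow the Act’s record-keeping requirements and the organisation’s own schema.

```python
import json
import logging
from datetime import datetime, timezone

# One JSON record per decision, appended to an audit file so that
# decisions can be traced and reviewed during the post-market phase.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

def logged_prediction(model_id, model_version, features, predict_fn):
    """Run a prediction and record what ran, when, and with which inputs."""
    score = predict_fn(features)
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": features,
        "output": score,
    }))
    return score

# Usage with a stand-in model (a constant score, for the demo):
logged_prediction("credit-scoring", "1.4.2",
                  {"income": 42000, "tenure_months": 18},
                  predict_fn=lambda f: 0.73)
```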

What this means for organisations

For organisations, AI is increasingly emerging as an essential lever for innovation. It is omnipresent in our daily lives (often without us realising it), serving people, boosting business, and supporting the public interest. However, this rapid progress brings new risks to security and fundamental rights, reinforcing the need for appropriate and well-thought-out regulation.

Challenges organisations must address to move towards trusted Artificial Intelligence

The challenges of artificial intelligence for organisations are considerable. The main ones are:

Training & Awareness

Organisations must invest in training and awareness for their teams, especially those in charge of audit, risk, compliance, and data (e.g., the Data Protection Officer). All stakeholders involved must understand the implications of AI and be trained on the specific regulatory requirements of the AI Act to ensure the ethical and compliant use of AI.

Evaluation and Evolution of Existing Systems

Organisations need to conduct a detailed inventory of their AI systems, categorising them by risk level. Co-constructing a framework covering some or all aspects of trusted artificial intelligence makes it possible to draw up an accurate picture of the current situation and to establish a strategic action plan to meet compliance requirements.

Governance and Sustainability of Trusted Artificial Intelligence

Robust governance is essential for the sustainable deployment of trusted Artificial Intelligence. This involves defining processes and controls both upstream (a priori, when designing a system or purchasing one from a third party) and downstream (a posteriori, for example through second and third lines of defence) to ensure continuous monitoring and alignment with the organisation’s framework.

These steps are fundamental to building a solid foundation of trusted artificial intelligence within organisations, ensuring that AI technologies are performant, resilient, and managed ethically and in compliance with applicable regulations. 

An AI governance framework to address these challenges

AI governance, maintenance, and auditability are essential for responsible AI practice. This governance framework can be based on four pillars: ethics, regulation, sustainability, and robustness. 

Ethics

AI ethics are fundamental to ensuring respect for human autonomy, harm prevention, fairness, and explainability. This involves developing and deploying AI systems that make fair decisions, are explainable, are transparent in their processes and outcomes, and are designed to complement human intelligence rather than replace or constrain it.

A concrete view of developing and deploying ethical AI, specifically of ensuring that AI algorithms do not create biases, could look like the following example of implementing ethical AI in recruitment:

  • Design Phase: During the design phase, the AI system is programmed with algorithms that actively identify and mitigate potential biases, which presupposes strong upstream data mastery. This can be achieved by integrating diverse datasets and applying fairness criteria that balance skewed representations of gender, ethnicity, or age, or by removing from the training data the attributes that induce such biases. 
  • Training Phase: The AI is trained on a balanced dataset where diversity is well-represented, and iterative testing is performed to identify any implicit biases. Developers might use techniques such as counterfactual fairness to understand the impact of sensitive attributes on the decision-making process. 
  • Deployment Phase: Before full deployment, the AI recruitment tool undergoes a rigorous audit by third-party experts to ensure that the algorithm does not perpetuate any form of discrimination. This audit includes reviewing the AI’s decision-making process to ensure it complies with ethical guidelines and legal requirements. 
  • Feedback Loop: The system includes a feedback mechanism where candidates can report any perceived biases or unfair treatment. This feedback is used to further refine the AI’s algorithms and training datasets. 
  • Monitoring Phase: Once deployed, the AI system is continuously monitored. All decisions made by the AI are recorded and reviewed periodically to ensure ongoing fairness (which requires an efficient system of tracking and lineage!). The company establishes a monitoring committee, for example through its AI Factory, to evaluate the AI’s performance and to intervene if any discriminatory patterns emerge; a minimal sketch of such a check follows this list. 
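
To make the monitoring phase concrete, the sketch below computes per-group selection rates from a log of hiring decisions and flags disparities using the common “four-fifths” heuristic. The threshold and the toy data are assumptions for illustration; real monitoring would combine several fairness metrics with human and legal review.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: (group, shortlisted) pairs -> positive-outcome rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, shortlisted in decisions:
        totals[group] += 1
        positives[group] += int(shortlisted)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_alert(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    best-served group's rate (the 'four-fifths' heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < threshold]

# Toy decision log: (demographic group, was the candidate shortlisted?)
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False), ("B", False)]
print(selection_rates(log))         # A: ~0.67, B: 0.25
print(disparate_impact_alert(log))  # ['B'] -> escalate to the committee
```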

Regulation

Adherence to legal frameworks such as the GDPR, the AI Act, and other relevant regulations ensures that AI systems comply with data protection, privacy, and intellectual property laws. These regulations help structure the use of AI within a clear legal and moral framework.  

A concrete detail could be the integration of a compliance module into the AI system that automatically checks the requirements of the GDPR, local privacy laws, and other regulations before processing personal data. This could include checks on the data’s origin, obtaining user consent for its use, and applying the principles of data minimisation and purpose limitation. Furthermore, such a system could generate automated audit reports to demonstrate compliance to regulators and provide interfaces for users to exercise their data protection rights, such as requests for access, rectification, and deletion of their data.
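
A minimal sketch of such a compliance gate could look as follows. The purposes, field names, and consent model are illustrative assumptions; a production module would plug into the organisation’s actual consent-management and data-catalogue systems.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalRecord:
    subject_id: str
    fields: dict
    consented_purposes: set = field(default_factory=set)

# Data minimisation: the only fields each processing purpose may read.
# Purposes and field names here are assumptions for the example.
ALLOWED_FIELDS = {
    "recruitment_scoring": {"skills", "experience_years"},
}

def compliance_gate(record: PersonalRecord, purpose: str) -> dict:
    """Release only data that is both consented to and necessary for
    the stated purpose (purpose limitation); refuse everything else."""
    if purpose not in record.consented_purposes:
        raise PermissionError(f"no consent recorded for purpose '{purpose}'")
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in record.fields.items() if k in allowed}

record = PersonalRecord(
    subject_id="c-001",
    fields={"skills": ["python"], "experience_years": 6, "home_address": "..."},
    consented_purposes={"recruitment_scoring"},
)
print(compliance_gate(record, "recruitment_scoring"))
# -> skills and experience_years only; home_address is withheld
```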

Sustainability

Sustainability aims to limit the direct and indirect environmental impact of AI systems. This includes balancing technology performance and carbon footprint, promoting green IT and a responsible digital trajectory for a sustainable future. 

A concrete detail could be the optimisation of AI algorithms to be more resource-efficient, thereby reducing the energy needed to operate them. This could involve advanced coding techniques that allow AI systems to perform the same tasks with fewer computations, organising the hierarchy of models to limit the search field and thus make keyword search more efficient (see our article on HRAG), or the use of data centres powered by renewable energy. Furthermore, GenAI governance could mandate regular audits of the energy efficiency of AI systems and encourage research and development into new, less energy-intensive technologies.
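
As a rough illustration of this kind of energy accounting, the sketch below estimates the annual footprint of serving a model and compares a large model with a distilled one. All power-draw, traffic, and carbon-intensity figures are assumptions for illustration, not measurements.

```python
# Back-of-the-envelope energy accounting for model inference.
# Every constant below is an illustrative assumption.
KWH_PER_GPU_HOUR = 0.3        # assumed average draw of one accelerator (300 W)
GRID_KG_CO2_PER_KWH = 0.23    # assumed grid carbon intensity

def inference_footprint(requests_per_day, seconds_per_request, days=365):
    """Rough annual energy (kWh) and emissions (kg CO2) of serving a model."""
    gpu_hours = requests_per_day * seconds_per_request * days / 3600
    kwh = gpu_hours * KWH_PER_GPU_HOUR
    return kwh, kwh * GRID_KG_CO2_PER_KWH

# A distilled model answering the same traffic five times faster:
for name, latency in [("large model", 2.0), ("distilled model", 0.4)]:
    kwh, co2 = inference_footprint(requests_per_day=10_000,
                                   seconds_per_request=latency)
    print(f"{name}: {kwh:,.0f} kWh/year, {co2:,.0f} kg CO2/year")
```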

Robustness

Robustness is about data management (accuracy, quality, control, access), attack resistance, security, reliability, and reproducibility. Contingency plans and general security must be in place to ensure that AI systems operate reliably and safely under various conditions and can withstand disruptions without compromising performance or data.  

A detail could be the implementation of advanced encryption protocols to secure data at every stage of the AI process, from collection to analysis. Along with an effective data management framework to monitor master data, quality, lineage, and so on, this would include the creation of data backup and recovery systems to prevent any loss in case of an incident. In parallel, regular resilience tests against cyber-attacks would assess the AI systems’ ability to withstand security breaches. Moreover, periodic audits of the AI systems would ensure their ongoing compliance with security standards and industry best practices. None of this is new, but it becomes all the more essential with the advent of Generative AI!
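
As one small building block of such a setup, here is a sketch of encrypting records at rest with the widely used `cryptography` Python package. Key management is deliberately simplified for the example: in practice the key would live in a dedicated key-management service, never next to the data.

```python
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: fetched from a key vault / KMS
cipher = Fernet(key)

record = b'{"candidate_id": "c-001", "score": 0.73}'
token = cipher.encrypt(record)    # the only thing that reaches storage
restored = cipher.decrypt(token)  # possible only with the key

assert restored == record
print(token[:20], b"...")  # opaque ciphertext: useless if storage is breached
```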

An example for the rest of the world

To conclude, the European AI Act marks a turning point in the global AI legislative landscape, moving towards trusted artificial intelligence. It is not only a pioneering regulatory framework for Europe but also a promising example for the rest of the world. This legislation is an excellent start to setting global standards for the management of AI, highlighting the importance of the ethical and responsible use of this technology. Its vision and approach could inspire other nations to follow the same path, adapting and adopting similar measures. Countries like Australia, which is already working on its own AI legislation, exemplify this growing trend of recognising and legislating for the unique challenges posed by AI.

This suggests that we are moving towards an era where technological innovation and ethical values go hand in hand, fostering the global use of a trusted artificial intelligence that respects human rights and promotes a just and equitable society. The EU AI Act is much more than just a regulation; it is the catalyst for a global movement towards trusted Artificial Intelligence, in which each country contributes uniquely to this shared mission. Together, we are on the path to a future where AI enriches humanity while respecting and protecting our core values.

Quentin

Quentin leads the initiatives related to Generative AI for onepoint in Asia-Pacific.
