Intro: Did I Finish...?
In 2016, I began writing a book that imagined a future deeply intertwined with artificial intelligence. A future where, from the moment of birth, every person is paired with a digital AI twin.
This companion, growing and learning alongside us, from baby to adult, becomes intimately familiar with every aspect of our personality: every baby mistake, every teenage crush, marriage, and eventually death; a symbiotic relationship. Stargate's Goa'uld, anyone?
However, the story soon took a darker turn, into a reality where the absence of boundaries for AI led to a dystopian world dominated by surveillance. Through the eyes of the digital twin, privacy became a relic of the past, painting a cautionary tale of a society entrapped in the very technology that once promised liberation.
Did I finish...?
The AI Act: It Does Matter
AI is developing at breakneck speed. It brings us many good things, but there is also a dark side. The European Union has recognised this and is leading the way by setting the first AI rules: the AI Act.
Writing an article about law might not seem thrilling. Honestly, working through the legislation took real effort.
But it's actually really important and pretty interesting when you think about how it shapes the future of AI. As a CTO, I'll need to address the AI Act sooner rather than later.
So what's actually in this new set of rules?
Rules: Why Should I Care?
The AI Act affects everyone: from those who make policies to those who create AI. From big companies and small companies to us, the regular users.
In this article I will try to explain the impact as simply and straightforwardly as I can.
We'll look into what the EU wants to do with these rules and what it means for the future of AI in Europe.
Overview of the AI Act
You can imagine the AI Act as a rulebook for artificial intelligence. It tries to make sure that, as we use more AI, we can use it safely.
The AI Act has two main goals:
- It wants to help new AI technologies grow and become better.
- It tries to make sure that this growth doesn't hurt our values, safety, or fundamental human rights.
So, the AI Act describes ground rules. It tries to find the right balance between AI innovation and making sure it's safe and fair for everyone.
Key Aspects
When you read the AI Act it becomes clear that it is a thoughtful plan designed to make sure AI helps us without causing anyone harm.
There are a few key aspects:
Values and rights: Systems should protect our privacy, treat us fairly, and not harm our democracy or the rule of law. Besides fundamental rights, the Act also aims to protect the environment, which will be very challenging given the energy consumption of these systems.
Innovation friendly: We should have strong rules for AI and still have a space where new technologies can grow. The Act is designed to encourage companies to come up with new, safe AI technologies.
Avoiding fragmentation: Every EU country could have its own AI rules, making it hard for AI services and goods to move freely. The Act creates one set of rules, helping to avoid confusion and making it easier for AI products and services to be available everywhere in the EU.
High-risk focus: The AI Act follows a risk-based approach. High-risk AI systems, meaning systems that could pose a big risk to people's safety or rights, have to follow stricter rules.
Rules for everyone: It doesn't matter if an AI company is inside the EU or in another country. The rules affect all.
Protections and bans: AI that could manipulate people or invade privacy is banned. AI should be there to help us, not to trick or watch over us.
Helping people understand AI: The AI Act wants to make sure that people know when they're interacting with AI, for example through watermarks in text and photos. It also wants people who use AI in their work to understand how it works. This is about making AI more transparent.
These key aspects form the foundation of the AI Act.
Categories of AI Systems
Not all AI systems are the same, right? True. So AI systems are sorted into different categories based on how risky they might be. There are four categories:
Minimal or low risk AI systems: Systems in this category have a small chance of harming people's rights or safety. These AI systems are encouraged to follow good practices, for example being transparent with users. Most AI systems will fall into this category.
Limited risk AI systems: AI that might pose some risks falls into this category, chatbots being a typical example. It must be clear to users that they are interacting with an AI system and not a human being.
High risk AI systems: AI technologies that could have a big impact on people's safety or fundamental rights. Examples include AI used in healthcare, policing, or job hiring. These systems have to follow the strictest rules. More on this below.
Prohibited AI practices: Some systems are too harmful and are therefore not allowed. This includes AI that can manipulate people, systems that watch people and break privacy, or AI that categorises people in unfair or discriminatory ways. These are red lines that are not allowed in the EU.
I think this is a logical way of categorising AI. It helps to focus on the applications with the most risk, making sure they are safe and fair, while still allowing room for new and innovative AI technologies to grow.
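Purely as an illustration, the four tiers can be modelled as a simple taxonomy. Note that the tier names, the example mapping, and the obligation summaries below are my own labels based on the examples in this article, not definitions taken from the Act itself:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of example use cases to the Act's four tiers,
# following the examples mentioned in this article.
EXAMPLES = {
    "social_scoring_by_government": RiskTier.PROHIBITED,
    "medical_diagnosis_support": RiskTier.HIGH,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """One-line summary of what each tier roughly implies."""
    return {
        RiskTier.PROHIBITED: "banned in the EU",
        RiskTier.HIGH: "strict requirements: testing, documentation, human oversight",
        RiskTier.LIMITED: "transparency: users must know they are talking to an AI",
        RiskTier.MINIMAL: "voluntary codes of good practice",
    }[tier]

print(obligations(EXAMPLES["customer_service_chatbot"]))
```

The point of the sketch: the obligation follows from the tier, so the hard (and legally contested) part is the classification itself, not what happens afterwards.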
High Risk AI Systems
High-risk AI systems are systems in areas where, if something goes wrong, it could really affect people's lives: their health, safety, or fundamental rights. Because of their potential impact, these systems have to meet higher standards.
An AI system is considered high-risk if it's used in important sectors like healthcare, transportation, or justice, and can significantly influence people's lives. Examples include AI that helps doctors diagnose diseases, or AI used in self-driving cars.
What are the rules regarding high-risk AI?
- Strict requirements: Transparency, reliability, and security. These systems need to have clear information about how the algorithm works and must be designed to avoid errors.
- Testing and documentation: The systems need to go through rigorous testing to check that they're safe for use. They also need detailed documentation so that anyone using them understands exactly how they work, what they can do, and what they cannot.
- Human oversight: There should always be a human in the loop. Decisions made by high-risk AI systems should be overseen by people who can step in if needed. AI supports human decision-making; it doesn't replace it.
- Data and privacy protection: Like the GDPR, data should be used responsibly. Systems should protect people's privacy and make sure the data they use is accurate and handled in a way that's fair and respects human rights.
- Continuous monitoring: After a high-risk AI system is put to use, it must be frequently checked to make sure it's still safe and working as it should. If any risks are found, local authorities need to be informed and the issues fixed quickly.
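Sketched as a checklist, the five obligations above might look like the following. This is purely illustrative (the field names and the all-or-nothing check are my own simplification, not anything the Act prescribes):

```python
from dataclasses import dataclass

@dataclass
class HighRiskCompliance:
    """Hypothetical record mirroring the five obligations listed above."""
    transparency_docs: bool      # clear information on how the algorithm works
    tested_and_documented: bool  # rigorous testing plus detailed documentation
    human_oversight: bool        # a human can step in and override decisions
    data_protection: bool        # GDPR-style responsible data handling
    monitoring_in_place: bool    # continuous checks after deployment

    def is_compliant(self) -> bool:
        # All obligations must hold; a single gap means non-compliance.
        return all(vars(self).values())

record = HighRiskCompliance(True, True, True, True, False)
print(record.is_compliant())  # False: monitoring is missing
```

The simplification makes one thing visible: these obligations are cumulative, so a system that aces testing but skips post-deployment monitoring still fails.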
Prohibited AI Practices
"Here's what AI should never do."
Not everything is allowed. The AI Act clearly outlines certain uses of AI that are considered too harmful and therefore not permitted.
Manipulative AI: AI that manipulates people's decisions in a way that could cause harm is banned. This includes AI that takes advantage of vulnerable people, leading them to make decisions that could hurt themselves or others. TikTok algorithm, anyone?
Social scoring: AI systems that governments could use to score people based on their behaviour or personality traits are a big no. This kind of social scoring could lead to unfair treatment or discrimination and is thus not allowed.
Mass surveillance: Using AI for surveillance that doesn't respect individuals' privacy rights is prohibited. This includes AI that can track people in a way that doesn't target specific criminal activities and lacks necessary safeguards.
Unjust biometric identification: The AI Act strictly limits the use of AI for real-time biometric identification in public spaces. There are narrow exceptions, such as preventing a concrete threat to public security, but even then strict checks and balances must be in place.
These rules guide developers and users towards responsible AI, making sure that as AI technologies advance, they do so in a way that's good for everyone.
Supporting Innovation and Research
AI has enormous potential to improve our lives, from advancing healthcare to protecting the environment. So the AI Act also needs to nurture innovation, not push it away.
Here is how the AI Act tries to nurture innovation:
Regulatory sandboxes: Safe environments where AI developers can test new technologies under regulatory supervision, without the full weight of the rules applying from day one.
Exemptions for research: Allowances for AI systems developed solely for research purposes. These exemptions aim to ensure that the pursuit of knowledge is not hindered.
Support for startups and SMEs: The AI Act includes measures to reduce the regulatory burden on smaller players, helping them grow and compete in the global market.
Focus on high-risk AI: By concentrating the most stringent regulations on high-risk applications, the AI Act allows for greater freedom in areas where AI poses less risk.
International collaboration: The AI Act also looks beyond EU borders, working with other countries and international organisations to share best practices and align AI development globally.
Regulation should not slow down innovation but guide it in a direction that is beneficial for society as a whole.
Governance and Enforcement
Without proper oversight the Act will not work, so governance and enforcement are a crucial part of making the AI Act succeed.
European Artificial Intelligence Board (EAIB): The Board will play a key role in ensuring consistent application of the AI Act across all EU member states. It's made up of representatives from each country and the European Commission. The Board will advise on AI matters, share best practices, and help coordinate the work of national supervisory authorities.
National supervisory authorities: Each EU state needs to designate one or more national authorities to oversee implementation of the AI Act. They will monitor AI systems, ensure compliance, and take action against violations.
Transparency and reporting: Providers of high-risk AI systems need to register their systems in an EU database. At this moment it's unknown which systems are operating in Europe, so this registry will make it easier for authorities to monitor and ensure compliance.
Support and resources for compliance: The EU plans to provide resources and guidance for businesses, especially startups and SMEs, which represent 99% of all businesses in Europe.
Public involvement and AI literacy: The governance framework also emphasises the importance of public engagement and AI literacy. By educating citizens about AI and its implications, the EU aims for an informed public discourse on AI.
What if a Company Doesn't Comply? Penalties.
Breaking the rules leads to penalties.
If a company, let's say Apple, doesn't tell the truth about how its AI works or ignores the regulations, it could face a fine of up to €30 million or 6% of its total global sales from the last year, whichever is higher.
Apple, with sales over $365 billion in 2021, could face a penalty of up to $21.9 billion.
Not following the strict rules for high-risk AI could lead to a fine of up to €20 million or 4% of annual global turnover. Even smaller mistakes, like forgetting to register a high-risk AI system, could lead to fines of up to €10 million or 2% of annual global turnover.
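The arithmetic behind these figures is simple enough to sketch. A caveat: the "whichever is higher" mechanic mirrors how GDPR-style fines work, and the tier labels below are my own shorthand, not wording from the Act:

```python
def max_fine(turnover: float, fixed_cap: float, pct: float) -> float:
    """Fine is the higher of a fixed cap or a share of global annual turnover."""
    return max(fixed_cap, pct * turnover)

# The three tiers cited in this article (draft-Act figures, my own labels):
TIERS = {
    "prohibited_or_misleading": (30_000_000, 0.06),
    "high_risk_noncompliance":  (20_000_000, 0.04),
    "administrative_lapses":    (10_000_000, 0.02),
}

cap, pct = TIERS["prohibited_or_misleading"]
apple_2021_sales = 365_000_000_000  # in dollars, as cited above
print(max_fine(apple_2021_sales, cap, pct) / 1e9)  # roughly 21.9 (billion)
```

For a small company the fixed cap dominates; for a giant like Apple the percentage does, which is exactly why the rule uses the higher of the two.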
Conclusion and Personal Reflections
The AI Act is an ambitious piece of legislation. By setting out clear rules for artificial intelligence, the EU seeks to foster an environment where AI can develop and grow โ but not at the expense of individual rights or societal values.
From my perspective, the AI Act is a necessary step towards responsible AI development and use. The focus on high-risk AI applications is there to protect citizens from the most significant potential harms.
However, I followed a Reddit discussion questioning whether the Act goes far enough in certain areas, or whether its definitions and categories are sufficient to cover the whole AI spectrum. There is a delicate balance between regulation and innovation, and some may feel the Act leans too much one way or the other.
The rapid pace of technological advancement may require continuous updates. Take, for example, the rapid development of humanoid robots like Figure01: if this development continues, will they eventually have rights of their own?
The effectiveness of the AI Act will largely depend on its implementation and enforcement. The creation of the European Artificial Intelligence Board and the role of national supervisory authorities are crucial, but their success will depend on resources, cooperation and commitment.
Nevertheless, the AI Act sets the stage for other non-EU countries to follow โ and that's a good thing.
…well, I guess I will never finish the book.
Originally published on LinkedIn.
Further reading: Cloud Sovereignty Was on Every List โ on the geopolitical risks of hyperscaler dependency. And take the AI Readiness Scan to see where your organisation stands on governance.