Hi there, Portugal Tech League players. We all know Artificial Intelligence has the transformative power to help solve some of our society's most enduring problems, from health to climate and environment-related issues. As it becomes increasingly commonplace across a wide range of fields, it is crucial to deliver a regulatory model which mitigates risks while actively supporting AI innovation.

Those were the expectations until the European Commission (EC) published the draft Artificial Intelligence Act (AIA) on 21 April 2021.

The genesis of the AIA dates back to 2017, when the EC was tasked with developing a European AI Strategy. A significant body of work has been produced since then. On 19 February 2020, the White Paper on Artificial Intelligence anticipated key elements of the AIA, strategically published alongside the European Data Strategy as its first pillar.

We all know the crucial relevance of data to AI systems. Some Member States called for tighter AI regulation at the EU level, while others advocated soft-law instruments to avoid stifling innovation through overregulation.

Essentially, the EC's draft Artificial Intelligence Act lays out a horizontal law governing the placing of AI systems on the EU market. But how?

The proposal is based on a four-tiered pyramid, shaped and divided following a risk-based approach proportional to the intended use of the AI system: the higher the risk to safety, health and fundamental rights, the stricter the rules. Thus, certain uses are banned outright, high-risk uses are heavily regulated, and low-risk uses are lightly regulated. Unsurprisingly, high-risk uses constitute the primary target of the AIA setup.
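To make the tiered logic concrete, here is a minimal sketch in Python. The tier names and the obligation summaries are simplified assumptions for illustration, not the Act's own wording:

```python
# Illustrative only: simplified mapping from risk tier to regulatory
# consequence under the draft AIA's risk-based pyramid.
OBLIGATIONS_BY_TIER = {
    "unacceptable": "banned outright",
    "high": "heavily regulated: conformity assessment, risk management, documentation",
    "limited": "transparency obligations (e.g. disclosing that a chatbot is a bot)",
    "minimal": "lightly regulated: voluntary codes of conduct",
}

def obligations_for(tier: str) -> str:
    """Return the (simplified) regulatory consequence for a given risk tier."""
    return OBLIGATIONS_BY_TIER[tier]
```

The point of the structure is exactly what the pyramid conveys: the obligation set is keyed to the tier, not to the technology itself.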


The logic behind the four-tiered risk pyramid proposed by the European Commission


So a few key questions arise, in our understanding:
- How to define “high” or “low” risk?
- How could your business comply, depending on its size and resources?
- How could we develop AI algorithms without GDPR-proof data?

This skeleton outlines the internal logic of the law, meant as the future normative ground if, in the best-case scenario by the second half of 2024, you intend to place an AI system on the EU market, irrespective of where you are based.

So let us add a bit more flesh to your reading.

You should know that this logic relies on two main elements: a definition of an AI system, with a tree of secondary definitions for each of the four risk levels; and a corresponding set of legal obligations and requirements to follow.

On the definition, it is as broad as it is open to debate. The material scope tends to get lost on the horizon, and the full scope and impacts are difficult to anticipate at this point. But you should know an important thing for your business, whether you are a developer or a user: even if the AI system you are developing and/or using falls within the scope of the proposal, that does not mean you will automatically be subject to novel obligations, or that the product or service you are developing needs to comply with new technical requirements.

This sounds contradictory, but it is actually the main positive stemming from the proportionality of the skeleton so far. New obligations and requirements are conditional on the “level of risk” emerging from the use of the AI system in your business. If your system falls within the high-risk level, you and your system must comply with a strict set of obligations; if it falls within a low-risk level, you face a different and, of course, less demanding set.

The obligations and requirements for high-risk AI are numerous. Some are technically unfeasible: for instance, your AI system would have to be trained on datasets that are “complete, representative, and free of errors”; it would almost be funny if it were not so serious. Others are financially burdensome: if you are a user, you should build a quality management system, and if you are a small-scale provider/user, you should expect a significant rise in the final invoice.

On the other hand, if you are a startup, you should know that one of the two fundamentals of the AIA is excellence, ranking slightly below trust in terms of priority. Unfortunately, supporting innovation deserves only two articles out of the 85 in the Artificial Intelligence Act: one focuses on regulatory sandboxes, the other on easing market entry. An outright imbalanced approach, even in formal terms.


Proposed procedure for the development and placing on the EU market of a high-risk AI system


Such a complex architecture needs to be implemented and enforced. An AI Board, yet to be created, might facilitate implementation, but Member States' market surveillance authorities and other competent authorities hold significant post-market executive power under the AI Act. Not easy to imagine, considering the resources traditionally allocated to competent authorities. This enforcement mechanism is backed by a system of fines and penalties which, in specific cases, can reach EUR 30,000,000 or 6% of total worldwide annual turnover for the preceding financial year, whichever is higher.
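The "whichever is higher" clause is easy to misread, so here is the arithmetic spelled out as a short sketch (the function name and the example turnover figure are ours, for illustration):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious infringements under the
    draft AIA: EUR 30,000,000 or 6% of total worldwide annual turnover for
    the preceding financial year, whichever is higher."""
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

# For a company with EUR 1 billion in turnover, 6% is EUR 60 million,
# which exceeds the EUR 30 million floor:
print(max_fine_eur(1_000_000_000))  # 60000000.0
```

In other words, the fixed EUR 30 million figure acts as a floor: for any company with more than EUR 500 million in worldwide annual turnover, the turnover-based percentage dominates.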

Be that as it may, if you deal with AI-enabled solutions in the fields of financial services, education, employment and human resources, law enforcement, industrial AI, medical devices, the car industry, machinery, toys, etc., you should already expect significant impacts on your operations considering the Act as it is now.

As a world-first attempt at horizontal regulation in this field, the AI Act will have far-reaching implications for organizations developing and using technologies and products incorporating AI, and not only for Europe-based ones but for any player with business within the single market.

With the European AI market being roughly a fifth of the world's, the European Commission wants to ensure trust along the supply chain and effective protection of citizens without hindering AI's socioeconomic benefits, thus providing European businesses with a first-mover advantage in the global competition. And this is relevant: if enacted, the legislation is expected to act as a blueprint for other jurisdictions, which might give EU digital players a competitive edge.

That’s why a convergent voice from the ecosystem is important, and you are invited to take part: that’s why we are here. Share your contributions to this discussion through the respective form on our website, and by doing so feed our position paper in the making; you’ll be cited at the end. Also, get to know the proposal from different perspectives in our podcast in early October: come out and play in this League.

It takes engagement to create a community.

You can suggest a topic, receive PTL’s updates or just contact us if you need anything:

Suggest a new Topic

If there is a specific matter you’d like Portugal Tech League to dwell on, this is the form you want to fill.

Subscribe to our Newsletter

Portugal Tech League’s newsletters are meant to update the community about the topics being approached, the developments on digital policies and any other interesting themes and events. This is where you can sign up to receive it.

Send us your enquiries

If you have any questions or just wish to reach out, this is the best way to contact Portugal Tech League.