European lawmakers are moving forward on what could become one of the most important laws of this generation. The EU AI Act takes a significantly more proactive regulatory approach than the current efforts of the United States and the United Kingdom. However, experts have identified a significant gap in the latest amendments to the law that, instead of protecting citizens and societies, could expose them to AI-related risks.
In short, this gap may compromise the very purpose of the proposed law and therefore needs to be closed. To do so successfully, lawmakers must prioritize machine identities as a means to improve AI governance, accountability, security, and trust. And they must act quickly.
What happened?
It is encouraging to see European lawmakers tackling what will be one of the biggest technological challenges the world has ever faced as AI becomes increasingly integrated into IT systems. The legislators' approach was to create a framework based on the level of risk posed by AI systems. Systems deemed to pose an “unacceptable risk” will be banned altogether, while those that pose a “high risk” will have to undergo multiple assessments and registrations before they are approved.
The law originally included significant safeguards. Any AI system would be considered high-risk if used for any of the high-risk purposes listed in Annex III of the bill. Developers and users of these systems must ensure that the technology is safe, free from discriminatory bias, and accompanied by publicly available information about how it operates.
Things have changed in the latest version of the bill.
Thanks to a new loophole, developers will be able to decide for themselves whether or not their systems pose a high risk. More than 100 civil society organizations (CSOs) have issued a call to close this loophole and restore the original system. They warn that unscrupulous or ill-informed developers could circumvent the law’s basic criteria by self-certifying their systems as “limited risk” or “minimal risk.”
The loophole would also create legal uncertainty about what counts as “high risk,” as well as fragmentation of markets across the bloc, since each Member State may interpret the concept differently. Local authorities may also find it difficult to effectively monitor developers’ self-assessments.
Reduce risks and strengthen accountability
In summary, the latest version of the EU AI law weakens regulatory oversight and developer accountability, undermines trust and security, and puts users at risk. But beyond this loophole, which needs to be closed as quickly as possible, lawmakers will need to go further to reduce risks and strengthen accountability in the fast-growing AI industry.
An example of strict control is the circuit breaker. Very common in manufacturing, chemical processing, and even gas stations, a circuit breaker makes it possible to safely stop a dangerous situation before it gets out of control. In the case of AI, it’s not a physical switch, but rather the ability to revoke the machine identity a model uses to authenticate.
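To make the idea concrete, here is a minimal sketch of what such a breaker could look like, assuming a model service that authenticates with an X.509 client certificate. All names (ALLOWED_FINGERPRINTS, authorize_request, trip_breaker) are hypothetical illustrations, not part of any existing product or of the proposed law.

```python
# A minimal sketch of a machine-identity "circuit breaker" for an AI service.
# Hypothetical names throughout; not a specific product's implementation.
import hashlib

# Fingerprints of machine identities (e.g. X.509 client certificates)
# currently authorized to serve model inference.
ALLOWED_FINGERPRINTS: set[str] = {
    "3f2a-placeholder-fingerprint-of-model-serving-cert",
}

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def authorize_request(client_cert_der: bytes) -> bool:
    """Allow the model service to act only while its identity is trusted."""
    return fingerprint(client_cert_der) in ALLOWED_FINGERPRINTS

def trip_breaker(cert_fp: str) -> None:
    """Revoke the identity: every subsequent authentication attempt fails,
    effectively halting the system without touching the model itself."""
    ALLOWED_FINGERPRINTS.discard(cert_fp)
```

The point of the design is that stopping the system requires no access to the model: removing trust in its identity is enough.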
Clearly, this is not a single kill switch for an AI model. In fact, thousands of machine identities can be associated with a single model: identities tied to the data that trains the model, to the model itself, and to its outputs. Models must be secured at every stage, both during training and in use, meaning that every machine, in every process, requires an identity in order to prevent contamination, tampering, or unauthorized access.
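As a rough sketch of what per-stage protection might look like, the snippet below gives each pipeline stage its own identity and uses it to tag artifacts so that tampering at any stage is detectable. The stage names and keys are hypothetical, and the shared-secret keys stand in for what would be certificate-based identities in a real deployment.

```python
# A sketch of per-stage integrity protection for a model pipeline.
# Hypothetical stage names and keys; shared secrets used purely for
# illustration in place of certificate-based machine identities.
import hashlib
import hmac

# One identity (here, a secret key) per pipeline stage.
STAGE_KEYS = {
    "training-data": b"hypothetical-key-for-data-identity",
    "model-weights": b"hypothetical-key-for-model-identity",
    "model-output": b"hypothetical-key-for-output-identity",
}

def tag(stage: str, artifact: bytes) -> str:
    """Bind an artifact to the identity of the stage that produced it."""
    return hmac.new(STAGE_KEYS[stage], artifact, hashlib.sha256).hexdigest()

def verify(stage: str, artifact: bytes, expected_tag: str) -> bool:
    """Reject the artifact if it changed since its stage tagged it."""
    return hmac.compare_digest(tag(stage, artifact), expected_tag)

# Example: the serving layer refuses weights whose tag no longer matches.
weights = b"...serialized model weights..."
t = tag("model-weights", weights)
assert verify("model-weights", weights, t)
assert not verify("model-weights", weights + b"tampered", t)
```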
This will naturally increase the number of machine identities on networks, and this is where AI can also help, by managing identities at a scale humans cannot. Many of the same principles can be applied to prevent AI models from being modified or tampered with, just as signed applications are protected today on smartphones and laptops.
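Carrying that analogy into code, the sketch below uses Ed25519 signatures from the widely used Python cryptography library to sign a packaged model and verify it before loading, the same kind of check an app store performs before installing software. The variable names are illustrative, and this is a sketch of the principle under those assumptions rather than any specific product’s mechanism.

```python
# A sketch of code-signing applied to a model artifact, assuming the
# third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher signs the packaged model with its private key...
private_key = Ed25519PrivateKey.generate()
model_blob = b"...packaged model weights and metadata..."
signature = private_key.sign(model_blob)

# ...and the loader verifies it with the matching public key before use,
# refusing any model that has been altered since it was signed.
public_key = private_key.public_key()
try:
    public_key.verify(signature, model_blob)
    print("model verified, safe to load")
except InvalidSignature:
    print("model rejected: signature check failed")
```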
The obligation to do better
Members of the European Parliament have an uphill task ahead of them in closing the gap in the EU’s AI legislation. But this is only the beginning. Closer collaboration with industry is needed to highlight the potential value of existing, robust technologies such as machine identities, which can strengthen AI governance and accountability.