April 2021 saw the proposal of a very 21st-century piece of regulation: the European Commission’s Regulation on Artificial Intelligence.
As artificial intelligence (AI) becomes increasingly adopted and commonplace, particularly in the life science sector, it’s unsurprising that the EU is proposing a standardised regulatory framework to begin controlling and governing it.
Is your business using AI, or involved in the release of AI systems for use elsewhere? Here’s what you should know.
What is AI?
‘AI’ immediately conjures futuristic images of advanced robotics and supercomputers. But the European Commission’s definition of artificial intelligence, which takes its cue from the OECD, is a little less fantastical. It defines AI as: any piece of software which generates an output based on a statistical, logical, knowledge-based or machine-learning approach.
With this definition, AI is already widely used and distributed across the EU. And the draft regulation places special emphasis on AI systems which pose a risk to:
- safety, and
- fundamental rights
The 'high-risk' label therefore falls heavily on AI systems in the health and life sciences space, where a failure to work as intended poses the most direct threat to safety.
The EU’s Medical Device Regulation and In Vitro Diagnostic Regulation have already acknowledged the increasing entwining of software with medical device development and operation – the draft AI Regulation aims to tackle potential threats to patient safety from the other direction by regulating how such software is developed and used.
Example high-risk AI systems
Medical and pharmaceutical AI systems are already widespread and fall squarely into the 'high-risk' category. Machine learning is widely used to automate and simplify key medical tasks.
Examples include:
- Analysing images such as CT and MRI scans to diagnose conditions
- Scanning tissue samples to count/recognise cell types
- Analysing ECG and EEG signals to diagnose heart and brain conditions
- Analysing genetic data to recommend and prescribe treatment
Medicinal software draws on the full range of these AI techniques, from machine-learning models to logic- and knowledge-based approaches.
As these systems directly impinge on patient safety, they are the primary targets of the EU’s draft regulation.
How will AI be regulated?
The European Commission’s proposal maps out six key regulatory touchstones for providers of high-risk AI systems:
- Risk management: Unsurprisingly, risk-based thinking and robust risk management will be key. A documented and effective risk management system should underpin the end-to-end lifecycle of every AI system your business develops and provides.
- Security: Closely connected to this, the AI system lifecycle should be underpinned by clear procedures and processes that maximise security and accuracy. Market surveillance authorities should be notified of security events as they occur.
- Transparency: We’ve already seen that generating an output is central to the definition of an AI system. That output should be transparent, clear and unambiguous, and complete instruction documentation should be provided to every user of an AI system so that its capabilities and functionality are fully understood.
- Accountability: Technical documentation should be distributed by providers and maintained by users. The outputs of the AI system should follow ALCOA+ principles, with complete audit trails and traceability. Every high-risk system must be registered in the EU’s high-risk AI system database.
- Testing: In a similar vein, any data used to train, validate or test the system must be complete, accurate and representative.
- Human review: We don’t want another Blade Runner. The regulation takes its cue from the GDPR’s guidance on automated decision-making and stipulates that AI systems must be designed so that human oversight, intervention and review are in place at all times.
Users of high-risk AI systems will also be subject to strict rules and limitations as follows:
- Appropriate use: The system should be applied only in line with the provider’s instructions and its intended operation
- Relevance: Any data input into the AI system should be relevant to the system’s intended operation
- Monitoring: System operation should be continuously monitored, with risks and risk events reported to the provider or distributor as soon as possible
- Documentation: All system logs should be properly stored and secured
And the regulation lays out a string of basic governing principles for all AI systems in general:
- Codes of Conduct should encourage, but not force, low-risk AI system providers and users to adhere to high-risk AI rules
- Individuals must be aware they are interacting with an AI system
- Individuals must be aware if biometric or emotional categorisation is being performed by the system
- Any artificial creation or alteration of pre-existing content must be disclosed
- Subliminal ‘dark pattern’ techniques and micro-targeting activities which influence or distort human behaviour are prohibited
Enforcement
The draft proposes a European Artificial Intelligence Board to coordinate and oversee national regulatory bodies. Member state authorities will be responsible for long-term market surveillance of AI systems, and for investigation and corrective action if a system is found to pose a risk to safety and/or to fundamental rights.
The AI regulation takes a leaf from the GDPR’s book too, proposing proportionate turnover-based penalties as well as flat fees for infringement – in this case fines of up to €30m or 6% of annual global turnover, whichever is higher.
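To make the headline figure concrete, here is a minimal sketch of that "whichever is higher" penalty calculation as described in the draft. The function name and the assumption that the 6% rate applies to the full annual global turnover are illustrative only – this is not legal guidance.

```python
# Illustrative sketch of the draft AI Regulation's headline penalty rule:
# the higher of a EUR 30m flat fee or 6% of annual global turnover.
# Function name and inputs are hypothetical, for illustration only.

def max_penalty_eur(annual_global_turnover_eur: float) -> float:
    """Return the higher of EUR 30m or 6% of annual global turnover."""
    FLAT_CAP = 30_000_000
    TURNOVER_RATE = 0.06
    return max(FLAT_CAP, TURNOVER_RATE * annual_global_turnover_eur)

# A business with EUR 1bn turnover: 6% is EUR 60m, exceeding the EUR 30m floor.
print(max_penalty_eur(1_000_000_000))  # prints 60000000.0
```

In other words, the flat €30m acts as a floor: only businesses turning over more than €500m a year would see the turnover-based figure bite.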
Next steps
The draft is just that: a proposal that must still pass through the EU legislative process before it becomes enforceable law.
But your business should review the regulation’s requirements now and ensure nothing you do would breach them. Organisations employing or distributing high-risk AI systems, particularly those in the life science and medical device sectors, should prioritise this early review period: they face the greatest risk of financial penalty once the regulation goes live.
The emphasis on personal safety and the protection of fundamental rights continues the broad direction of recent European legislation. Businesses in or connected to the EU can expect a further tightening of regulation along these lines in the coming years.