Artificial Intelligence Regulation in the European Union
Developments in artificial intelligence range from robots and self-driving cars to virtual assistants such as Siri (Apple), Google Assistant (Google) and Bixby (Samsung). Against this backdrop, the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) released draft AI Ethics Guidelines in December 2018, with the final version due for release in March 2019. The AI HLEG defines artificial intelligence (AI), sometimes called machine intelligence, as “systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions”.
The key underlying principles identified for the future regulation of AI include the recognition and protection of human dignity, autonomy, the rule of law, safety, accountability and sustainability. The European Union is seeking to stay at the forefront of the AI revolution and to secure its competitiveness, particularly relative to the USA.
The Commission Vice-President for the Digital Single Market, Andrus Ansip, said:
“Step by step, we are setting up the right environment for Europe to make the most of what artificial intelligence can offer. Data, supercomputers and bold investment are essential for developing artificial intelligence, along with a broad public discussion combined with the respect of ethical principles for its take-up. As always with the use of technologies, trust is a must.”