

(A.) Policy and legislation

(A.1) Policy objectives

Although there is no generally accepted definition of artificial intelligence (AI), in 2019 the Organisation for Economic Co-operation and Development (OECD) adopted the following definition of an AI system: ‘An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.’

AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image-analysis software, search engines, speech and face recognition systems), embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications), or a combination of both.

We use AI on a daily basis, e.g. to translate languages, generate subtitles in videos or block email spam. Beyond making our lives easier, AI is helping us to solve some of the world’s biggest challenges: from treating chronic diseases or reducing fatality rates in traffic accidents to fighting climate change or anticipating cybersecurity threats. Like the steam engine or electricity in the past, AI is transforming our world, our society and our industry.

The term AI was coined in 1956. Since then, research on AI has encompassed a large variety of computing techniques and spread across many different application areas. Historically, the development of AI has alternated between periods of fast progress, called ‘AI springs’, and periods of reduced funding and interest, called ‘AI winters’. Currently, AI is experiencing another spring, driven by three main factors: the huge amount of available data generated by the world wide web and sensor networks, the affordability of high-performance processing power, even in low-cost personal devices, and progress in algorithms and computing techniques. Another characteristic of the present AI wave is that it goes far beyond the research community and targets product innovations and business-oriented services with high commercial potential, which assures its sustainability.

The way AI is approached will shape the digital future. In order to enable European companies and citizens to reap the benefits of AI, we need a solid European framework.

The new EU strategy on AI was published on 25 April 2018 in the Commission Communication on Artificial Intelligence for Europe. One of the main elements of the strategy is an ambitious proposal to achieve a major boost in investment in AI-related research and innovation and to facilitate and accelerate the adoption of AI across the economy.

The target is to reach a total of €20 billion in AI-related investment, including both the public and the private sector, for the three years up to 2020. For the decade after, the goal is to reach the same amount as an annual average. This is of crucial importance if we want to ensure that European industry does not miss the boat.

The other two main focus areas of the EU strategy on AI concern preparing for socio-economic changes and ensuring an appropriate ethical and legal framework for high-risk AI systems. It is essential to increase the number of people with advanced skills in new digital technologies. It is also important to ensure the accessibility, interoperability, usability and affordability of AI technologies. Safeguarding against the potential reinforcement of discrimination on grounds including, but not limited to, disability, sexual orientation, gender identity, sex characteristics, age and social status is another vital aspect, as are individual security and privacy concerns. More broadly, it is important to give all citizens and workers every opportunity to acquire suitable skills for the digital economy.

In December 2018, the Commission presented a Coordinated Plan on AI with Member States to foster the development and use of AI. It represents a joint commitment that reflects the understanding that, by working together, Europe can maximise its potential to compete globally. The main aims set out in the plan are to maximise the impact of investments at EU and national levels, to encourage synergies and cooperation across the EU, and to foster the exchange of best practices.

In February 2020 the Commission issued a White Paper on AI, proposing an overall EU strategy built on an ecosystem of excellence and an ecosystem of trust for AI. The ecosystem of excellence in Europe refers to measures which support research, foster collaboration between Member States and increase investment in AI development and deployment. The ecosystem of trust is based on EU values and fundamental rights, and foresees robust requirements that would give citizens the confidence to embrace AI-based solutions, while encouraging businesses to develop them. The European approach to AI ‘aims to promote Europe’s innovation capacity in the area of AI, while supporting the development and uptake of ethical and trustworthy AI across the EU economy. AI should work for people and be a force for good in society.’

In particular, to create an ecosystem of excellence the Commission proposed to use Horizon Europe and the Digital Europe programme to support:

  • A new public-private partnership in AI, data and robotics;
  • The strengthening and networking of AI research excellence centres;
  • The setting up of AI testing and experimentation facilities;
  • AI-focused digital innovation hubs, which facilitate the uptake of AI by SMEs and public administrations;
  • An alliance of universities for the strengthening of AI (data science) skills;
  • An increase in the provision of equity financing for innovative development and deployment in AI through InvestEU;
  • Sector dialogues to facilitate the development of a new programme (‘Adopt AI’) to support public procurement of AI systems;
  • International cooperation with like-minded countries, companies and civil society on AI based on EU rules, values and safety requirements.

To create an ecosystem of trust the Commission proposes a comprehensive package of measures to address problems posed by the introduction and use of AI. In accordance with the White Paper and the Commission Work Programme, the EU plans to adopt three sets of inter-related initiatives related to AI:

  • A European horizontal legal framework for AI to address fundamental rights and safety risks specific to AI systems. This framework would propose a risk-based and proportionate regulatory approach;
  • EU legislation to address liability issues related to new technologies, including AI systems;
  • A revision of sectoral safety legislation to address issues related to new technologies, including AI. With regard to AI, this will complement the horizontal framework, addressing primarily the integration of AI systems into physical products.

With regard to the European horizontal legal framework for AI, the main elements of the White Paper are as follows:

  • A risk-based and proportionate regulatory approach;
  • The identification of high-risk AI systems through a combination of (i) the sector and (ii) the concrete use of the system;
  • In certain cases, AI systems can be considered high-risk irrespective of the sector (e.g. recruitment processes and remote biometric identification);
  • Mandatory requirements for high-risk AI systems only:
      • Training data should be of high quality and respect the EU’s rules and values;
      • Record keeping of the relevant data sets and of the programming and training methodologies;
      • Provision of information about the AI system’s performance;
      • Robustness and accuracy;
      • Human oversight.
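To make the record-keeping requirement above concrete, the following is a minimal, hypothetical sketch of how a provider might log the identity of a training data set and the training methodology for later audit. The function and field names are illustrative assumptions, not something mandated by the White Paper.

```python
# Hypothetical sketch of the "record keeping" requirement for a high-risk
# AI system: capture an auditable fingerprint of the training data set
# together with a description of the training methodology.
# All names and fields below are invented for illustration.
import hashlib
import json
from datetime import datetime, timezone

def make_training_record(dataset_bytes: bytes, methodology: str) -> dict:
    """Return an auditable record of one training run."""
    return {
        # Cryptographic digest identifies the exact data set used
        "dataset_sha256": hashlib.sha256(dataset_bytes).hexdigest(),
        # Free-text description of the programming/training methodology
        "methodology": methodology,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

record = make_training_record(b"example training data",
                              "gradient boosting, 5-fold cross-validation")
print(json.dumps(record, indent=2))
```

A regulator or auditor could later recompute the digest of the archived data set and compare it with the stored record to verify which data the system was trained on.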

For AI systems that are not high-risk, a voluntary label could be considered.

The contents and proposals of the White Paper were subject to a public consultation, which gathered more than 1200 responses. 

(A.2) EC perspective and progress report

AI is a field that has seen little standardisation activity in the past. However, the large increase in interest and activity around AI in recent years brings with it a need for a coherent set of AI standards. In response, ISO and IEC have created a standardisation committee on AI, ISO/IEC JTC 1/SC 42, which is the most active body in the fields of AI and big data. A CEN/CENELEC Focus Group on Artificial Intelligence (AI) was also established in December 2018, and a roadmap for AI standardisation was published. The professional association IEEE is also very active in investigating and proposing new standards for AI, particularly in the field of ethics.

As a follow-up to the White Paper, the Commission plans to present a proposal for a new regulatory framework on AI in Q1 2021. It is expected that adequate standards will be available to support that framework by the time it becomes applicable. As a consequence, the most likely areas where new AI standards will be required are those addressed by the future requirements of the AI framework, primarily: training data, record keeping, provision of information and transparency, robustness and accuracy, human oversight, and testing.

(A.3) References 

(B.) Requested actions

Action 1 Foster coordination and interaction of all stakeholders in providing European requirements for AI, e.g. based on the work of the AI High-Level Expert Group, Member States’ initiatives, the OECD, etc. Encourage the development of shared visions as a basis for input and requirements to standardisation.

Action 2 SDOs should further increase their coordination efforts around AI standardisation, both in Europe and internationally, in order to avoid overlap or unnecessary duplication of efforts, and should aim for the highest quality to ensure a trustworthy and safe deployment of this technology.

Action 3 SDOs should establish coordinated linkages with, and consider European requirements from, initiatives, including policy initiatives, and organisations contributing to the discourse on AI standardisation. This includes in particular the results of the EU HLEG on AI, as well as the European Parliament, Member States’ initiatives, the Council of Europe, and others.

Action 4 SDOs to consider cybersecurity and related aspects of artificial intelligence, to identify gaps and develop the necessary standards on the safety, privacy and security of artificial intelligence, to protect against malicious use of artificial intelligence, and to use artificial intelligence to protect against cyber-attacks.

Action 5 Within the AI4EU initiative, identify leading open-source activities which complement standardisation work and analyse to what extent they respond to EU requirements. Where useful, establish dialogue, liaisons or partnerships with such open-source projects.

Action 6 EC/JRC to coordinate with SDOs and other initiatives on developing a standardisation landscape and gap analysis for AI. This work should include recommendations for an action plan. 

Action 7 SDOs to continue their efforts on ethics and trust in AI, including transparency/explainable AI, privacy, etc.

(C.) Activities and additional information

(C.1) Related standardisation activities

The CEN-CENELEC Focus Group on Artificial Intelligence addresses AI standardisation in Europe, both through a bottom-up approach (similar to ISO/IEC JTC 1 SC 42), and a top-down approach (concentrating on a long-term plan for European standardisation).

Key items include:

  • Mapping of current European and international standardisation initiatives on AI
  • Identifying specific standardisation needs
  • Formulating recommendations on the best way to address AI Ethics in the European context
  • Identifying the CEN and CENELEC TCs that will be impacted by AI
  • Monitoring potential changes in European legislation
  • Liaising with the High-Level Expert Group on AI and identifying synergies
  • Acting as the focal point for the CEN and CENELEC TCs
  • Encouraging further European participation in the ISO and IEC TCs

The CEN-CENELEC Focus Group has published a response to the EC White Paper on AI as well as the CEN-CENELEC Roadmap for AI standardisation. Both documents are publicly available.


A summary of ETSI work on AI can be found in a dedicated white paper.

The ETSI ISG on Securing Artificial Intelligence (ISG SAI), created in October 2019, focuses on three key areas: using AI to enhance security, mitigating against attacks that leverage AI, and securing AI itself from attack. ISG SAI collaborates closely with ENISA.

The first outputs of ISG SAI will centre on six key topics, detailed in the SAI work programme:

  • Problem Statement, which will guide the work of the group
  • Threat Ontology for AI, to align terminology
  • Data Supply Chain, focused on data issues and risks for training AI
  • Mitigation Strategy, with guidance to mitigate the impact of AI threats
  • Security testing of AI
  • Role of hardware in security of AI

ETSI has other ISGs working in the AI/ML (Machine Learning) domain. They are all defining specifications of AI/ML functionalities that will be used in network technologies.

  • ISG on Experiential Networked Intelligence (ISG ENI) develops standards that use AI mechanisms to assist in the management and orchestration of the network.
  • ISG ZSM is defining the AI/ML enablers in end-to-end service and network management.
  • ISG F5G on Fixed 5G will define the application of AI in the evolution of the fixed network towards ‘fibre to everything’.

Under the areas of the Rolling Plan where new AI standards are needed, ETSI ISG CIM has published specifications for a data interchange format (ETSI CIM GS 009 V1.2.1 NGSI-LD API) and a flexible information model (ETSI CIM GS 006 V1.1.1) which support the exchange of information from e.g. knowledge graphs and can facilitate modelling of the real world, including relationships between entities.
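To illustrate the kind of information exchange these specifications enable, the following is a minimal sketch of an NGSI-LD-style entity carrying a property and a relationship, the building blocks used for modelling the real world and knowledge graphs. The identifiers and attribute names are invented for illustration and the snippet is not taken from the ETSI specifications.

```python
import json

# Minimal sketch of an NGSI-LD-style entity (cf. ETSI CIM GS 009 NGSI-LD API).
# Identifiers and attribute names below are hypothetical examples.
room = {
    "id": "urn:ngsi-ld:Room:room001",  # entity identifier (URN)
    "type": "Room",
    "temperature": {                   # a Property: a value of the entity
        "type": "Property",
        "value": 21.5,
        "unitCode": "CEL",
    },
    "isPartOf": {                      # a Relationship: a link to another entity
        "type": "Relationship",
        "object": "urn:ngsi-ld:Building:building001",
    },
    "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld",
}

# Entities are exchanged as JSON-LD documents over the NGSI-LD API.
payload = json.dumps(room, indent=2)
```

The explicit distinction between properties (values) and relationships (links between entities) is what allows collections of such entities to be treated as a knowledge graph.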

  • ISG ENI is also defining AI/ML functionality that can be used and reused throughout the network, cloud and end devices. This work will make the deployment of future 5G networks more intelligent and efficient.

SEG 10 – Ethics in Autonomous and Artificial Intelligence Applications


Governance implications of the use of AI by organisations (a joint project with ISO/IEC JTC 1/SC 40 IT Service Management and IT Governance).


SC 42 (Artificial Intelligence) is looking at the international standardisation of the entire AI ecosystem. With 13 current projects under development and 6 working groups, the programme of work has been growing rapidly and is expected to continue to do so in 2020. Key items within the work programme include:

  • SC 42/WG 1 – Foundational AI standards. Current projects include:
      • ISO/IEC 22989: Artificial Intelligence Concepts and Terminology
      • ISO/IEC 23053: Framework for Artificial Intelligence Systems Using Machine Learning
  • SC 42/WG 2 – Big data ecosystem. Current projects include:
      • ISO/IEC 20547-1: Information technology -- Big Data reference architecture -- Part 1: Framework and application process
      • ISO/IEC 20547-3: Information technology -- Big Data reference architecture -- Part 3: Reference architecture
      • ISO/IEC 24688: Information technology -- Artificial Intelligence -- Process management framework for Big data analytics
  • SC 42/WG 3 – AI Trustworthiness. Current projects include:
      • ISO/IEC 24027: Information technology -- Artificial Intelligence (AI) -- Bias in AI systems and AI aided decision making
      • ISO/IEC 24028: Information technology -- Artificial Intelligence (AI) -- Overview of trustworthiness in Artificial Intelligence
      • ISO/IEC 24029: Information technology -- Artificial Intelligence (AI) -- Assessment of the robustness of neural networks
      • ISO/IEC 23894: Information technology -- Artificial intelligence -- Risk management
      • ISO/IEC 24368: Information technology -- Artificial Intelligence (AI) -- Overview of Ethical and Societal Concerns
  • SC 42/WG 4 – AI Use cases and applications. Current projects include:
      • ISO/IEC 24030: Information technology -- Artificial Intelligence (AI) -- Use cases
  • SC 42/WG 5 – Computational approaches and computational characteristics of AI systems. Current projects include:
      • ISO/IEC 24372: Information technology -- Artificial Intelligence (AI) -- Overview of computational approaches for AI systems
  • SC 42/JWG 1 – Governance implications of AI (joint with ISO/IEC JTC 1/SC 40 IT Service Management and IT Governance):
      • ISO/IEC 38507: Information technology -- Governance of IT -- Governance implications of the use of artificial intelligence by organizations
  • ISO/IEC JTC 1/SC 40 IT Service Management and IT Governance:
      • SC 40/WG 1 has commenced work on ISO/IEC 38508: Governance of data -- Guidelines for data classification

In addition to the above projects, a number of study topics are assigned to the various working groups, including topics that cross multiple areas, such as ethics, societal concerns and lifecycle, which are being considered across the work programme.

In addition, SC 42 has developed over 30 active liaisons with ISO and IEC committees, SDOs and industry organisations to encourage collaboration and to build out the industry ecosystem around AI and big data.


IEEE has a significant amount of activity in both the fields of Autonomous and Intelligent Systems (A/IS) as well as in related vertical industry domains.

In 2019 the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (‘The IEEE Global Initiative’) started a project on ‘Ethically Aligned Design for Business: A call to action for businesses using AI’.

Stemming from the IEEE Global Initiative, the IEEE P7000™ standards projects address ethical considerations in a broad range of issues regarding autonomous and intelligent systems, including transparency, privacy, algorithmic bias, children’s data, employee data, creating an algorithmic agent for individuals, creating an ethical robotic ontological framework, dealing with robotic nudging, creating a uniform fail-safe standard for A/IS, defining well-being metrics relating to A/IS, assessing news sources to keep them accountable and objective in reporting, creating machine-readable privacy terms for all individuals, and updating facial recognition systems and databases to avoid bias.

AI is not a single technology. Different aspects of ML and other AI techniques are addressed by a number of technical standardisation projects, including the following:

  • IEEE P2807, Framework of Knowledge Graphs
  • IEEE P2807.1, Standard for Technical Requirements and Evaluating Knowledge Graphs
  • IEEE P2830, Standard for Technical Framework and Requirements of Shared Machine Learning
  • IEEE P2841, Framework and Process for Deep Learning Evaluation
  • IEEE P3652.1, Guide for Architectural Framework and Application of Federated Machine Learning



The IETF Autonomic Networking Integrated Model and Approach (ANIMA) Working Group will develop a system of autonomic functions that carry out the intentions of the network operator without the need for detailed low-level management of individual devices. This will be done by providing a secure closed-loop interaction mechanism whereby network elements cooperate directly to satisfy management intent. The working group will develop a control paradigm in which network processes coordinate their decisions and automatically translate them into local actions, based on various sources of information, including operator-supplied configuration information and existing protocols, such as routing protocols.

Autonomic networking refers to the self-managing characteristics (configuration, protection, healing and optimisation) of distributed network elements, adapting to unpredictable changes while hiding intrinsic complexity from operators and users. Autonomic networking, which often involves closed-loop control, is applicable to the complete network (functions) lifecycle (e.g. installation, commissioning, operation, etc.). An autonomic function that works in a distributed way across various network elements is a candidate for protocol design. Such functions should allow central guidance and reporting, and co-existence with non-autonomic methods of management. The general objective of this working group is to enable the progressive introduction of autonomic functions into operational networks, as well as a reusable autonomic network infrastructure, in order to reduce operating expenses.
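The closed-loop idea described above can be illustrated with a toy sketch: the operator supplies only a high-level intent (a target link load), and a network element repeatedly measures and adjusts its local rate, with no per-device micromanagement. All names, numbers and the load model are invented for illustration and do not come from any IETF specification.

```python
# Toy sketch of closed-loop autonomic control: local decisions driven by
# operator intent (a target load), not by detailed per-device configuration.
# The proportional gain and the load model are illustrative assumptions.

def autonomic_step(current_load: float, target_load: float, rate: float) -> float:
    """One control-loop iteration: nudge the local rate toward the intent."""
    error = target_load - current_load
    return max(0.0, rate + 0.5 * error)  # simple proportional adjustment

rate = 10.0   # the element's current local sending rate
load = 14.0   # observed link load (initially above the intent)
for _ in range(20):
    rate = autonomic_step(load, target_load=8.0, rate=rate)
    load = 0.7 * load + 0.3 * rate  # toy model: load gradually follows the rate
```

After a few iterations the observed load settles near the operator's intent of 8.0, without the operator ever configuring the rate directly, which is the essence of intent-driven, closed-loop management.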


The AI for Good Global Summit is the leading United Nations platform for global and inclusive dialogue on AI. The Summit is hosted each year in Geneva by the ITU in partnership with UN sister agencies, the XPRIZE Foundation and ACM.

The ITU-T Focus Group on Machine Learning for future networks including 5G (FG-ML5G; operational from November 2017 to July 2020) delivered 11 outputs to Study Group 13. These documents cover methods for evaluating the intelligence level of future networks, data handling to enable machine learning in future networks, use cases of ML in future networks, and a unified architecture for ML in 5G. The latter was refined and approved in 2019 as Recommendation ITU-T Y.3172 “Architectural framework for machine learning in future networks including IMT-2020”. This work was complemented by the approval of Supplement 55 to the Y.3170-series “Machine learning in future networks including IMT-2020: use cases” and Recommendations ITU-T Y.3173 “Framework for evaluating intelligence levels of future networks including IMT-2020”, Y.3174 “Framework for data handling to enable machine learning in future networks including IMT-2020” and Y.3176 “Machine learning marketplace integration in future networks including IMT-2020”.

The AI/Machine Learning in 5G Challenge was launched by ITU/TSB in March 2020; enterprises are invited to take part, and results will be announced in November 2020. SG13 has approved Y.3170 “Requirements for machine learning-based quality of service assurance for the IMT-2020 network”, Y.3175 “Functional architecture of machine learning-based quality of service assurance for the IMT-2020 network” and Y.3531 “Cloud computing - Functional requirements for machine learning as a service”, and is working on Y.ML-IMT2020-NA-RAFR (Architecture framework of AI-based network automation for resource adaptation and failure recovery in future networks including IMT-2020) and Y.ML-IMT2020-serv-prov (Architecture framework of user-oriented network service provisioning for future networks including IMT-2020), as well as on ML for big-data-driven networking, ML as a tool to better shape traffic, man-like networking, machine-learning-based QoS assurance for 5G, network slicing with AI-assisted analysis in 5G, an AI-integrated cross-domain network architecture for future networks including IMT-2020, AI-based network automation, a framework for user-oriented network service provisioning, and an AI standards roadmap, which provides a matrix of document types against the related technologies supporting artificial intelligence.

Supplement 67 to the Y.3000-series of ITU-T Recommendations “Representative use cases and key network requirements for Network 2030” describes an intelligent operation network use case for networks in operation around 2030.

Other activities related to standardisation
The European AI Alliance

The High-Level Expert Group on Artificial Intelligence

AI on Demand Platform


R&D&I projects funded under topic ICT-26 of the H2020 ICT Work Programme 2018-20 can produce relevant input for standardisation.

(C.2) Additional information
European AI Alliance

The European AI Alliance is a forum set up by the European Commission and engaged in a broad and open discussion of all aspects of AI development and its impacts. Given the scale of the challenge associated with AI, the full mobilisation of a diverse set of participants, including businesses, consumer organisations, trade unions and other representatives of civil society bodies, is essential. The European AI Alliance will form a broad multi-stakeholder platform which will complement and support the work of the AI High-Level Expert Group, in particular in preparing draft AI ethics guidelines and in ensuring the competitiveness of Europe in the burgeoning field of artificial intelligence. The Alliance is open to all stakeholders, is managed by a secretariat and is already open for registration.

High-Level Expert Group on Artificial Intelligence (AI HLG)

On 14 June 2018, the Commission appointed 52 world-class experts to a new High-Level Expert Group on Artificial Intelligence, comprising representatives from academia, civil society and industry. Moreover, the AI HLG will serve as the steering group for the European AI Alliance’s work, interact with other initiatives, help stimulate a multi-stakeholder dialogue, gather participants’ views and reflect them in its analysis and reports.

In particular, the group will be tasked to:

  • Advise the Commission on next steps addressing AI-related mid- to long-term challenges and opportunities through recommendations which will feed into the policy development process, the legislative evaluation process and the development of a next-generation digital strategy.
  • Propose to the Commission draft AI ethics guidelines, covering issues such as fairness, safety, transparency, the future of work, democracy and more broadly the impact on the application of the Charter of Fundamental Rights, including privacy and personal data protection, dignity, consumer protection and non-discrimination.
  • Support the Commission on further engagement and outreach mechanisms to interact with a broader set of stakeholders in the context of the AI Alliance, share information and gather their input on the group’s and the Commission’s work.

In September 2019, the Committee of Ministers of the Council of Europe set up an Ad Hoc Committee on Artificial Intelligence (CAHAI). The Committee will examine, on the basis of broad multi-stakeholder consultations, the feasibility and potential elements of a legal framework for the development, design and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law. The Committee, which brings together representatives from the member states, will exchange views with leading experts on the impact of AI applications on individuals and society, the existing soft-law instruments specifically dealing with AI, and the existing legally binding international frameworks applicable to AI.

AI on Demand Platform

The European Commission launched a call for proposals to fund a large €20 million project on Artificial Intelligence (AI) under the Horizon 2020 R&D framework programme. It aims to mobilise the AI community in Europe to combine efforts, develop synergies among all existing initiatives and optimise Europe’s potential. The call closed on 17 April 2018, the received proposals have been evaluated, and the awarded project started on 1 January 2019.

Under the next multi-annual budget, the Commission plans to increase its investment in AI further, mainly through two programmes: the research and innovation framework programme (Horizon Europe), and a new programme called Digital Europe.

UNESCO International research centre on Artificial Intelligence (IRCAI)

UNESCO has approved the establishment of IRCAI, which will be based in Ljubljana (Slovenia). IRCAI aims to provide an open and transparent environment for AI research and debates on AI, providing expert support to stakeholders around the globe in drafting guidelines and action plans for AI. It will bring together stakeholders with a variety of know-how from around the world to address global challenges, support UNESCO in carrying out its studies and take part in major international AI projects. The centre will advise governments, organisations, legal persons and the public on systemic and strategic solutions for introducing AI in various fields.

AI studies

In addition to the previous initiatives, the Commission is planning to conduct several technical studies on AI, among them one specifically targeted at identifying safety standardisation needs.

Standard sharing with other domains

AI is a vast scientific and technological domain that overlaps with other domains discussed in this Rolling Plan, e.g. big data, e-health, robotics and autonomous systems. Many of the standardisation activities in these domains will be beneficial for AI and vice versa. For more details, please refer to section (C.1) Related standardisation activities.