ARTIFICIAL INTELLIGENCE

(A.) Policy and legislation

(A.1) Policy objectives

Although there is no generally accepted definition of artificial intelligence (AI), in 2019, the Organisation for Economic Co-operation and Development (OECD) adopted the following definition of an AI system: ‘An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.’

AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications) or a combination of both.

We are using AI on a daily basis, e.g. to translate languages, generate subtitles in videos or to block email spam. Beyond making our lives easier, AI is helping us to solve some of the world’s biggest challenges: from treating chronic diseases or reducing fatality rates in traffic accidents to fighting climate change or anticipating cybersecurity threats. Like the steam engine or electricity in the past, AI is transforming our world, our society and our industry.

The term AI was coined in 1956. Since then, research on AI has covered a large variety of computing techniques and spread over many different application areas. Historically, the development of AI has alternated between periods of fast progress, called ‘AI springs’, and periods of reduced funding and interest, called ‘AI winters’. Currently, AI is experiencing another spring, driven by three main factors: the huge amount of available data generated by the world wide web and sensor networks, the affordability of high-performance processing power, even in low-cost personal devices, and progress in algorithms and computing techniques. Another characteristic of the present AI wave is that it goes far beyond the research community and targets product innovations and business-oriented services with high commercial potential, which assures its sustainability.

The way of approaching AI will shape the digital future. In order to enable European companies and citizens to reap the benefits of AI, we need a solid European strategy and framework.

A new EU strategy on AI was published on 25 April 2018 in the Commission Communication on Artificial Intelligence for Europe. One of its main elements is an ambitious proposal to achieve a major boost in investment in AI-related research and innovation and to facilitate and accelerate the adoption of AI across the economy.

The target was to reach a total of €20 billion in AI-related investment, including both the public and the private sector, for the three years up to 2020. For the decade after, the goal is to reach the same amount as an annual average. This is of crucial importance if the EU is to compete on a global scale with regard to AI development and uptake.

In December 2018, the Commission presented a Coordinated Plan on AI with Member States to foster the development and use of AI. It represents a joint commitment that reflects the understanding that, by working together, Europe can maximise its potential to compete globally. The main aims set out in the plan are: to maximise the impact of investments at EU and national levels, to encourage synergies and cooperation across the EU, and to foster the exchange of best practices.

In February 2020 the Commission issued a White Paper on AI. The overall EU strategy proposed in the White Paper is built on an ecosystem of excellence and an ecosystem of trust for AI. The ecosystem of excellence in Europe refers to measures which support research, foster collaboration between Member States and increase investment in AI development and deployment. The ecosystem of trust is based on EU values and fundamental rights, and foresees robust requirements that would give citizens the confidence to embrace AI-based solutions, while encouraging businesses to develop them. The European approach for AI ‘aims to promote Europe’s innovation capacity in the area of AI, while supporting the development and uptake of ethical and trustworthy AI across the EU economy’. AI should work for people and be a force for good in society.

Following a public consultation, the objectives of the White Paper translated into a key AI package adopted by the Commission on 21 April 2021. This package includes a proposal for the first ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally, and the 2021 review of the Coordinated Plan.

The proposal for a legal framework is aimed at laying down rules to ensure that AI systems used in the EU are safe and do not compromise fundamental rights. The key elements of the proposal are:

  • the definition of AI, which builds on the one elaborated by the OECD
  • rules for the definition of high-risk AI systems
  • compliance and enforcement mechanisms for the high-risk AI use cases
  • rules on the use of remote biometric identification
  • mandatory obligations for providers and users of high-risk AI systems
  • notification obligations for systems posing certain specific transparency risks

The proposal is complementary to, and applies in conjunction with, the existing EU acquis on data protection and fundamental rights.

The 2021 Review of the Coordinated Plan on AI puts forward a concrete set of joint actions for the European Commission and Member States on how to create EU global leadership on trustworthy AI. The proposed key actions reflect the vision that to succeed, the European Commission together with Member States and private actors need to:

  • accelerate investments in AI technologies to drive resilient economic and social recovery facilitated by the uptake of new digital solutions;
  • act on AI strategies and programmes by implementing them fully and in a timely manner to ensure that the EU reaps the full benefits of first-mover adopter advantages; and
  • align AI policy to remove fragmentation and address global challenges

Standardisation activities are one of the action areas identified in the 2021 Coordinated Plan as an area for joint action between the European Commission and Member States.

(A.2) EC perspective and progress report

AI is a field that has seen little standardisation activity in the past. However, the big increase in interest and activity around AI in recent years brings with it a need for a coherent set of AI standards. In response, ISO and IEC have created a standardisation committee on AI, namely ISO/IEC JTC 1/SC 42, which is the most active in the field of AI and big data. A CEN-CENELEC Focus Group on Artificial Intelligence (AI) was also established in December 2018 and a roadmap for AI standardisation was published. Subsequently, CEN-CENELEC created a joint technical committee, namely CEN-CENELEC JTC 21, which started its activities on 1 June 2021. The professional association IEEE is also very active in investigating and proposing new standards for AI, particularly in the field of ethics.

In addition, ETSI is active in the use of AI in ICT and a summary of current work on AI can be found in a dedicated white paper. In October 2019 ETSI created the ISG on Securing Artificial Intelligence (ISG SAI) focusing on three key areas: using AI to enhance security, mitigating against attacks that leverage AI, and securing AI itself from attack. ISG SAI collaborates closely with ENISA.

The proposal for the new AI Regulation is set as New Legislative Framework-type legislation. Hence, the role of harmonised standards will be key to providing the detailed technical specifications through which economic operators can achieve compliance with the relevant legal requirements. Harmonised standards are thus a key tool for the implementation of the legislation and contribute to the specific objective of ensuring that AI systems are safe and trustworthy.

As a consequence, the European Commission intends to intensify the elaboration of standards in the area of AI to ensure that standards are available to operators on time, ahead of the application date of the future AI framework. In this respect, the Commission intends to issue a first standardisation request in accordance with Regulation (EU) 1025/2012 in the course of 2022. The most likely areas where new AI standards will be required are those addressed by the future requirements of the AI framework, primarily: data governance and data quality; record keeping, provision of information and transparency; trustworthiness, robustness, accuracy and cybersecurity; human oversight; risk management and testing; conformity assessment; quality management systems; lifecycle monitoring; and users’ conduct.

(A.3) References

(B.) Requested actions

Action 1 SDOs should establish coordinated linkages with, and adequately consider European requirements or expectations from, initiatives (including policy initiatives) and organisations contributing to the discourse on AI standardisation. This in particular includes the contents of the EU proposal for an AI Regulation and of the standardisation request on AI issued by the European Commission in 2022, as well as the orientations set in the 2021 review of the Coordinated Plan.

Action 2 SDOs should further increase their coordination efforts around AI standardisation, both in Europe and internationally, in order to avoid overlap or unnecessary duplication of efforts, and aim at the highest quality to ensure a trustworthy and safe deployment of this technology.

Action 3 ESOs should coordinate with the Commission and appropriately direct their activities to ensure that the objectives set in the standardisation request on AI issued in 2022 are fulfilled adequately and in a timely manner.

Action 4 Taking into account the cross-sectorial aspects of the proposed AI Regulation and the interactions between the AI Regulation and existing or future sectorial safety legislation (for example the proposed new EU Regulation on machinery products), ESOs shall devote specific attention to the elaboration of standards on the methodology of risk assessment of cyber-physical products powered by AI and on the testing framework.

Action 5 SDOs should appropriately consider cybersecurity and related aspects of artificial intelligence, to identify gaps and develop the necessary standards on safety, privacy and security of artificial intelligence, to protect against malicious artificial intelligence and to use artificial intelligence to protect against cyber-attacks.

Action 6 EC/JRC to coordinate with SDOs and other initiatives on developing a standardisation landscape and gap analysis for AI. This work should include recommendations for an action plan.

Action 7 Stakeholders in open source to identify relevant open source projects in the field of AI, e.g. providing tools for testing, benchmarking etc.

(C.) Activities and additional information

(C.1) Related standardisation activities
CEN-CENELEC

The CEN-CENELEC JTC 21 on Artificial Intelligence addresses AI standardisation in Europe, both through a bottom-up approach (similar to ISO/IEC JTC 1/SC 42) and a top-down approach concentrating on a long-term plan for European standardisation and future AI regulation.

The JTC shall produce standardisation deliverables in the field of Artificial Intelligence (AI) and related use of data, as well as provide guidance to other technical committees concerned with Artificial Intelligence. The JTC shall also consider the adoption of relevant international standards and standards from other relevant organisations, like ISO/IEC JTC 1 and its subcommittees, such as SC 42 Artificial intelligence. Finally, the JTC shall produce standardisation deliverables to address European market and societal needs and to underpin primarily EU legislation, policies, principles, and values.

JTC 21 has initiated the following activities:

  • Mapping of current European and international standardisation initiatives on AI
  • Identifying specific standardisation needs
  • Monitoring potential changes in European legislation
  • Liaising with relevant TCs and organizations in order to identify synergies and, if possible, initiate joint work
  • Acting as the focal point for the CEN and CENELEC TCs
  • Encouraging further European participation in the ISO and IEC TCs

Prior to the establishment of JTC 21, the CEN-CENELEC Focus Group on AI explored the possibilities for a dedicated CEN-CENELEC TC on AI. The Focus Group published two documents: a response to the EC White Paper on AI as well as the CEN-CENELEC Roadmap for AI standardisation. Both documents are available here.
After it completed its tasks, the Focus Group on AI was disbanded and its documents and assets were transferred to the CEN-CENELEC JTC 21.

ETSI

A summary of ETSI work on AI can be found in a dedicated white paper.

The ETSI ISG on Experiential Networked Intelligence (ENI ISG), created in March 2017, is defining a Cognitive Network Management architecture. This architecture uses Artificial Intelligence (AI) techniques and context-aware policies to adjust offered services based on changes in user needs, environmental conditions and business goals.

ISG ENI outputs centre around network optimisation and the Cognitive Network Management architecture highlighted at https://eniwiki.etsi.org/index.php?title=ISG_ENI_Activities; this is described further in the white paper at https://www.etsi.org/images/files/ETSIWhitePapers/etsi-wp44_ENI_Vision.pdf.

The ETSI ISG on Securing Artificial Intelligence (ISG SAI), created in October 2019, focuses on three key areas: using AI to enhance security, mitigating against attacks that leverage AI, and securing AI itself from attack. ISG SAI collaborates closely with ENISA.

The first ISG SAI outputs have centred around six key topics; the following have been published or are in development to date, in part in response to Action 5 above:

  • Problem Statement, Published in December 2020
  • Mitigation Strategy, Published in March 2021
  • Data Supply Chain, Published in August 2021
  • Threat Ontology for AI, to align terminology
  • Security testing of AI
  • Role of hardware in security of AI
  • Explainability and transparency of AI processing
  • Privacy aspects of AI/ML systems

Further details of the current SAI work programme can be found at: https://portal.etsi.org/Portal_WI/form1.asp?tbid=877&SubTB=877

ETSI has other ISGs working in the domain of AI/ML (Machine Learning). They are all defining specifications of AI/ML functionality for use in their respective technology areas.

  • ISG on Experiential Networked Intelligence (ISG ENI) develops standards that use AI mechanisms to assist in the management and orchestration of the network.
  • ISG ZSM is defining the AI/ML enablers in end-to-end service and network management.
  • ISG F5G on Fixed 5G is going to define the application of AI in the evolution of the fixed network towards ‘fibre to everything’.
  • ISG NFV on network functions virtualisation studies the application of AI/ML techniques to improve automation capabilities in NFV management and orchestration.

Under the areas of the Rolling Plan where new AI standards are needed, ETSI ISG CIM has published specifications for a data interchange format (ETSI CIM GS 009 V1.2.1 NGSI-LD API) and a flexible information model (ETSI CIM GS 006 V1.1.1) which support the exchange of information from e.g. knowledge graphs and can facilitate modelling of the real world, including relationships between entities.
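As an illustration of the kind of payload the NGSI-LD API exchanges, the sketch below builds a minimal NGSI-LD entity in Python. The identifiers and values are hypothetical; only the general shape (typed Property/Relationship attribute nodes plus a JSON-LD `@context`) follows the published NGSI-LD specifications.

```python
import json

# A minimal NGSI-LD entity of the kind exchanged through the NGSI-LD API
# (ETSI CIM GS 009). The identifiers and values below are hypothetical;
# the general shape follows the specification: attributes are typed
# "Property"/"Relationship" nodes, and "@context" maps terms to IRIs.
entity = {
    "id": "urn:ngsi-ld:Vehicle:A4567",   # entity identifier (URN)
    "type": "Vehicle",                   # entity type from the information model
    "speed": {"type": "Property", "value": 55},
    "isParked": {                        # relationship linking to another entity
        "type": "Relationship",
        "object": "urn:ngsi-ld:OffStreetParking:Downtown1",
    },
    "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld",
}

payload = json.dumps(entity)             # body of e.g. an entity-creation request
```

Relationships such as `isParked` are what allow NGSI-LD data to be assembled into knowledge graphs modelling relationships between real-world entities.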

  • ISG ENI is also defining AI/ML functionality that can be used/reused throughout the network, cloud and end devices; this work will make the deployment of future 5G networks more intelligent and efficient.
  • ISG NFV has published a report on enabling autonomous management in NFV-MANO (ETSI GR NFV-IFA 041), which provides recommendations on standardisation work to be carried out on Management Data Analytics (MDA) assisted management in NFV-MANO.

IEC

SEG 10 Ethics in Autonomous and Artificial intelligence Applications

https://www.iec.ch/dyn/www/f?p=103:186:0::::FSP_ORG_ID,FSP_LANG_ID:22827,25

ISO/IEC JTC 1

SC 42 (Artificial Intelligence) is looking at the international standardisation of the entire AI ecosystem. With 8 published standards, 22 projects under development and 6 working groups, the programme of work has been growing rapidly and continues to grow in 2021. Key items within the work programme are listed below.

The following is the list of published SC 42 standards:

  • ISO/IEC 20546:2019 Information technology — Big data — Overview and vocabulary
  • ISO/IEC TR 20547-1:2020 Information technology — Big data reference architecture — Part 1: Framework and application process
  • ISO/IEC TR 20547-2:2018 Information technology — Big data reference architecture — Part 2: Use cases and derived requirements
  • ISO/IEC 20547-3:2020 Information technology — Big data reference architecture — Part 3: Reference architecture
  • ISO/IEC TR 20547-5:2018 Information technology — Big data reference architecture — Part 5: Standards roadmap
  • ISO/IEC DTR 24027 Information technology -- Artificial Intelligence (AI) -- Bias in AI systems and AI aided decision making
  • ISO/IEC TR 24028:2020 Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence
  • ISO/IEC TR 24029-1:2021 Artificial Intelligence (AI) — Assessment of the robustness of neural networks — Part 1: Overview
  • ISO/IEC TR 24030:2021 Information technology — Artificial intelligence (AI) — Use cases
  • ISO/IEC 24372 Information technology -- Artificial Intelligence (AI) -- Overview of computational approaches for AI systems

The following is the list of SC 42 projects under development:

WG 1 – Foundational AI standards

  • ISO/IEC DIS 22989 Artificial Intelligence Concepts and Terminology
  • ISO/IEC 23053 Framework for Artificial Intelligence Systems Using Machine Learning
  • ISO/IEC 42001 Artificial Intelligence - Management System

WG 2 – Big data ecosystem

  • ISO/IEC 24688 Information technology -- Artificial Intelligence -- Process management framework for Big data analytics
  • ISO/IEC 5259-1 Data quality for analytics and ML - Part 1: Overview, terminology, and examples
  • ISO/IEC 5259-2 Data quality for analytics and ML - Part 2: Data quality measures
  • ISO/IEC 5259-3 Data quality for analytics and ML - Part 3: Data quality management requirements and guidelines
  • ISO/IEC 5259-4 Data quality for analytics and ML - Part 4: Data quality process framework

WG 3 - AI Trustworthiness

  • ISO/IEC 23894 Information technology -- Artificial intelligence -- Risk management
  • ISO/IEC 24368 Information technology -- Artificial Intelligence (AI) -- Overview of Ethical and Societal Concerns
  • ISO/IEC 25059 Software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Quality Model for AI systems
  • ISO/IEC 8200 Information technology — Artificial intelligence — Controllability of automated artificial intelligence systems

WG 4 – AI Use cases and applications

  • ISO/IEC 24030 Information technology -- Artificial Intelligence (AI) -- Use cases (2nd ed.)
  • ISO/IEC 5338 Information technology -- Artificial Intelligence (AI) – AI system life cycle processes
  • ISO/IEC 5339 Information technology -- Artificial Intelligence (AI) – Guidelines for AI applications

WG 5 – Computational approaches and computational characteristics of AI systems

  • ISO/IEC 4213 Information technology — Artificial Intelligence — Assessment of machine learning classification performance
  • ISO/IEC 5392 Information technology — Artificial intelligence — Reference architecture of knowledge engineering
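To make concrete what an assessment of machine learning classification performance involves, the sketch below computes the usual confusion-matrix metrics for a binary classifier. It is purely illustrative and does not reproduce the procedure defined in ISO/IEC 4213; the helper function and the sample labels are hypothetical.

```python
# Illustrative only: the confusion-matrix metrics such an assessment typically
# reports for a binary classifier (not the ISO/IEC 4213 procedure itself).
def classification_metrics(y_true, y_pred):
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true positives
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # true negatives
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "accuracy": (tp + tn) / len(pairs),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall)
              if precision + recall else 0.0,
    }

# Hypothetical ground-truth labels and model predictions.
m = classification_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1])
```

Reporting several metrics together matters because accuracy alone can mask poor behaviour on an under-represented class.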

ISO/IEC JTC 1/SC 40 & 42 JWG 1

  • ISO/IEC 38507 -- Information technology -- Governance of IT -- Governance implications of the use of artificial intelligence by organizations

In addition to the above projects under development, a number of ad hoc groups in the SC 42 WGs are studying topics that cross multiple areas such as:

  1. machine learning computing devices
  2. ontologies, knowledge engineering, and representation
  3. data quality governance framework
  4. testing of AI systems
  5. AI standards landscape and roadmap
  6. coordination with JTC 1 SC 27 on AI security and privacy proposed standards
  7. data quality visualization

In addition, SC 42 has developed over 30 active liaisons with ISO and IEC committees, SDOs and industry organizations to encourage collaboration and build out the industry ecosystem around AI and big data.

ISO/IEC JTC 1 SC 7 - Software and systems engineering

ISO/IEC 25012:2008 Software engineering — Software product Quality Requirements and Evaluation (SQuaRE) — Data quality model

ISO/IEC TR 29119-11:2020 Software and systems engineering — Software testing — Part 11: Guidelines on the testing of AI-based systems.

IEEE

IEEE has a significant amount of activity in both the fields of Autonomous and Intelligent Systems (A/IS) as well as in related vertical industry domains.

In 2016 the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (“The IEEE Global Initiative”) started a project called ‘Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Autonomous and Intelligent Systems’ (EAD). The EAD work was used by the Future of Life Institute (FLI), the Organisation for Economic Co-operation and Development (OECD) and some companies to create their AI principles.

The IEEE 7000 series (see: https://ethicsinaction.ieee.org/p7000/) addresses ethical considerations in a broad range of issues regarding autonomous and intelligent systems, including transparency, privacy, algorithmic bias, children’s data, employee data, creating an algorithmic agent for individuals, creating an ethical robotic ontological framework, dealing with robotic nudging, creating a uniform fail-safe standard for A/IS, defining well-being metrics relating to A/IS, assessing news sources to keep them accountable and objective in reporting, creating machine-readable privacy terms for all individuals, and considering the ethical implications of emulated empathy in AI systems.

In 2020, IEEE released the first of the 7000 series, IEEE 7010-2020, IEEE Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being.

In 2021, IEEE released IEEE 7000-2021, IEEE Standard Model Process for Addressing Ethical Concerns During System Design.

Along with EAD and these standards projects, IEEE has created other resources related to AI:

  • EAD for Business: A Call to Action for Businesses Using AI
  • The IEEE Trusted Data and Artificial Intelligence Systems (AIS) Playbook for Financial Services
  • Addressing Ethical Dilemmas in AI: Listening to Engineers
  • Measuring What Matters in the Era of Global Warming and the Age of Algorithmic Promises
  • The Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS)

Other aspects of ML and other AI techniques are addressed by other standardisation projects, including:

  • IEEE P2807, Framework of Knowledge Graphs
  • IEEE P2807.1, Standard for Technical Requirements and Evaluating Knowledge Graphs
  • IEEE P2830, Standard for Technical Framework and Requirements of Shared Machine Learning
  • IEEE P2841, Framework and Process for Deep Learning Evaluation
  • IEEE P2863, Recommended Practice for Organizational Governance of Artificial Intelligence
  • IEEE 3652.1-2020, Guide for Architectural Framework and Application of Federated Machine Learning
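IEEE 3652.1-2020 concerns federated machine learning, in which a shared model is trained without centralising the raw data. The sketch below illustrates the underlying federated-averaging idea on a toy one-parameter model; the function names and the toy update rule are hypothetical, not taken from the guide.

```python
# Toy federated-averaging sketch (illustrative only; names and the update rule
# are hypothetical, not from IEEE 3652.1-2020). Each client improves the model
# on its own data; the server aggregates the resulting parameters as a
# data-size-weighted average, so raw training data never leaves the clients.

def local_update(w, data, lr=0.1):
    # One pass of gradient steps for a 1-D least-squares model y = w * x.
    for x, y in data:
        w -= lr * (w * x - y) * x
    return w

def federated_average(client_weights, client_sizes):
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

clients = [[(1.0, 2.0), (2.0, 4.0)], [(1.0, 2.1)]]   # each dataset stays local
global_w = 0.0
for _ in range(20):                                   # communication rounds
    local_ws = [local_update(global_w, d) for d in clients]
    global_w = federated_average(local_ws, [len(d) for d in clients])
```

Only model parameters cross the network in each round, which is what makes the approach attractive where data cannot be pooled for privacy or regulatory reasons.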

More information is available at https://ieeesa.io/rp-ais and https://standards.ieee.org/initiatives/artificial-intelligence-systems/index.html.

IETF
  • The IETF Autonomic Networking Integrated Model and Approach Working Group will develop a system of autonomic functions that carry out the intentions of the network operator without the need for detailed low-level management of individual devices. This will be done by providing a secure closed-loop interaction mechanism whereby network elements cooperate directly to satisfy management intent. The working group will develop a control paradigm where network processes coordinate their decisions and automatically translate them into local actions, based on various sources of information including operator-supplied configuration information or from the existing protocols, such as routing protocol, etc.

Autonomic networking refers to the self-managing characteristics (configuration, protection, healing, and optimization) of distributed network elements, adapting to unpredictable changes while hiding intrinsic complexity from operators and users. Autonomic Networking, which often involves closed-loop control, is applicable to the complete network (functions) lifecycle (e.g. installation, commissioning, operating, etc). An autonomic function that works in a distributed way across various network elements is a candidate for protocol design. Such functions should allow central guidance and reporting, and co-existence with non-autonomic methods of management. The general objective of this working group is to enable the progressive introduction of autonomic functions into operational networks, as well as reusable autonomic network infrastructure, in order to reduce operating expenses.
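The closed-loop behaviour described above can be pictured as a monitor/decide/act cycle. The sketch below is a conceptual toy, not an IETF-defined interface (all names are hypothetical): an autonomic function repeatedly reads a metric, compares it against operator-supplied intent, and acts locally.

```python
# Conceptual toy of an autonomic closed-loop function (hypothetical names,
# not from any IETF specification): monitor a metric, decide against an
# operator-supplied intent (the target), and act locally each iteration.
def run_control_loop(read_metric, apply_action, target, rounds=10):
    history = []
    for _ in range(rounds):
        value = read_metric()          # monitor
        history.append(value)
        if value > target:             # decide: is the intent being violated?
            apply_action("scale_up")   # act locally
        else:
            apply_action("hold")
    return history

# Toy environment: "scaling up" reduces the observed load by 20%.
load = {"value": 90.0}

def read_metric():
    return load["value"]

def apply_action(action):
    if action == "scale_up":
        load["value"] *= 0.8

history = run_control_loop(read_metric, apply_action, target=50.0)
# The loop drives the metric below the operator's target and then holds.
```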

https://trac.ietf.org/trac/iab/wiki/Multi-Stake-Holder-Platform#AI

ITU

The AI for Good Global Summit is the leading United Nations platform for global and inclusive dialogue on AI. The Summit is hosted each year in Geneva by ITU in partnership with UN sister agencies, the XPRIZE Foundation and ACM. More info: https://aiforgood.itu.int

ITU-T Study Group 13 has approved various ITU-T Recommendations covering AI-based networks as well as machine learning in future networks and IMT-2020, including use cases, architectural frameworks, quality of service assurance, service provisioning, data handling, learning models, network automation for resource and fault management, marketplace integration, cloud computing and quantum key distribution networks (e.g. Recommendations ITU-T Y.3170, Y.3172, Y.3173, Y.3174, Y.3175, Y.3176, Y.3177, Y.3178, Y.3179, Y.3531, Sup. 55 to the Y.3170-series and Sup. 70 to the Y.3800-series). More info: https://www.itu.int/en/ITU-T/focusgroups/ml5g/Pages

SG13 continues development of Recommendations on the above topics, as well as ML for big-data-driven networking, ML as a tool to better shape traffic, and man-like networking. Also, in the framework of 5G, SG13 studies ML and AI to enhance QoS assurance, network slicing, operation management of cloud services, integrated cross-domain network architecture, network automation, and a framework for user-oriented network service provisioning. It also maintains the AI standards roadmap, which has a matrix of different document types per vertical versus the related technologies for supporting AI. For more info, contact tsbsg13@itu.int.

ITU has been at the forefront to explore how to best apply AI/ML in future networks including 5G networks. To advance the use of AI/ML in the telco industry, ITU launched the AI/ML in 5G Challenge in March 2020. The Challenge rallies like-minded students and professionals from around the globe to study the practical application of AI/ML in emerging and future networks. It also enhances the community driving standardisation work for AI/ML, creating new opportunities for industry and academia to influence international standardisation. The Challenge solutions can be accessed in several repositories on the Challenge GitHub: https://github.com/ITU-AI-ML-in-5G-Challenge.

More info: https://aiforgood.itu.int/about/aiml-in-5g-challenge/

AI for Road Safety: ITU, together with the UN Secretary-General’s Special Envoy for Road Safety and the Envoy on Technology, agreed to launch a new initiative on AI for Road Safety, in line with UN General Assembly Resolution A/RES/74/299 on Improving Global Road Safety, which highlights the role of innovative automotive and digital technologies. AI for Road Safety aims to leverage AI to enhance the safe system approach to road safety.

The new initiative will also support achieving UN SDG target 3.6, to halve by 2030 the number of global deaths and injuries from road traffic accidents, and SDG target 11.2, to provide access to safe, affordable, accessible and sustainable transport systems for all by 2030. See:

https://aiforgood.itu.int/event/ai-for-road-safety/

https://aiforgood.itu.int/about/ai-ml-pre-standardisation/ai4roadsafety/

ITU-T SG20 approved Recommendation ITU-T Y.4470 “Reference architecture of artificial intelligence service exposure for smart sustainable cities” that introduces AI service exposure (AISE) for smart sustainable cities (SSC), and provides the common characteristics and high-level requirements, reference architecture and relevant common capabilities of AISE, and agreed Supplement ITU-T Y.Suppl.63 “Unlocking Internet of things with artificial intelligence” that examines how artificial intelligence could step in to bolster the intent of urban stakeholders to deploy IoT technologies and eventually transition to smart cities.

More info: https://itu.int/go/tsg20

ITU-T Study Group 5 approved Recommendation ITU-T L.1305 “Data centre infrastructure management system based on big data and artificial intelligence technology”. This standard contains technical specifications of a data centre infrastructure management (DCIM) system, covering: principles, management objects, management system schemes, data collection function requirements, operational function requirements, energy saving management, capacity management for information and communication technology (ICT) and facilities, other operational function requirements and intelligent controlling on systems to maximize green energy use. Other aspects such as maintenance function requirements, early alarm and protection based on big data analysis and intelligent controlling on systems to decrease the cost for maintenance are also considered.

More info: https://itu.int/go/tsg5

The Focus Group on Environmental Efficiency for Artificial Intelligence and other emerging technologies (FG-AI4EE) identifies the standardisation needs to develop a sustainable approach to AI and other emerging technologies. The FG-AI4EE is developing technical reports and specifications on requirements, assessment and measurement, and implementation guidelines for AI and other emerging technologies.

More info: https://itu.int/go/fgai4ee

The ITU-T Focus Group on AI for Autonomous and Assisted Driving (FG-AI4AD) aims to define a minimal performance threshold for the AI systems responsible for the driving task in vehicles, so that an automated vehicle always operates on the road at least as safely as a competent and careful human driver.

More info: https://itu.int/go/fgai4ad

The ITU-T Focus Group on Artificial Intelligence for Health (FG-AI4H), established by ITU in partnership with WHO, is working towards establishing a standardized assessment framework for the evaluation of AI-based methods for health, diagnosis, triage or treatment decisions. https://www.itu.int/en/ITU-T/focusgroups/ai4h/

The Focus Group on Artificial Intelligence for Natural Disaster Management (FG-AI4NDM) aims to underscore best practices for leveraging AI to support data collection and modelling across spatiotemporal scales, and to provide effective communications in the event of disasters of natural origin. The activities of this Focus Group are conducted in collaboration with the World Meteorological Organization (WMO) and the United Nations Environment Programme.

More info: https://itu.int/go/fgai4ndm

ITU-R

AI in Radiocommunication Standards: ITU Radiocommunication (ITU-R) Study Groups and forthcoming reports examine the use of AI in radiocommunications:

  • ITU-R Study Group 1 covers all aspects of spectrum management, including spectrum monitoring. Question 241/1 looks at “Methodologies for assessing or predicting spectrum availability”.
  • ITU-R Study Group 6, dedicated to broadcasting services, is also studying AI and ML applications:
  • Question ITU-R 144/6, “Use of AI for broadcasting”, considers the impact of AI technologies and how they can be deployed to increase efficiency in programme production, quality evaluation, programme assembly and broadcast emission.
  • Recommendation ITU-R BS.1387, “Method for objective measurements of perceived audio quality”, was the first application of neural networks, a technique now referred to as AI (artificial intelligence), in the field of broadcasting.
  • Report ITU-R BT.2447, “AI systems for programme production and exchange”, discusses current applications and near-term initiatives. This Report is revised regularly to reflect the latest progress of AI applications across the broadcasting industry chain.
OASIS

RECITE (REasoning for Conversation and Information Technology Exchange) is a new OASIS Open Project dedicated to developing a standard for dialogue modelling in conversational agents. It aims to establish interoperability between software vendors.

oneM2M

oneM2M provides a standardized IoT data source for AI/ML applications. Furthermore, the oneM2M work item “System enhancements to support AI capabilities” (WI-0105) aims to enable oneM2M to utilize Artificial Intelligence models and data management for AI services. All oneM2M specifications are publicly accessible on the oneM2M website (onem2m.org). See also the section on IoT in the Rolling Plan.

(C.2) Other activities related to standardisation
The European AI Alliance

https://ec.europa.eu/digital-single-market/en/european-ai-alliance

The High-Level Group on Artificial Intelligence

https://ec.europa.eu/digital-single-market/high-level-group-artificial-intelligence

AI on Demand Platform

http://ec.europa.eu/research/participants/portal/desktop/en/opportunities/h2020/topics/ict-26-2018-2020.html

H2020

R&D&I projects funded under topic ICT-26 of the H2020 ICT Work Programme 2018-2020 can produce relevant input for standardisation.

http://ec.europa.eu/research/participants/portal/desktop/en/opportunities/h2020/topics/ict-26-2018-2020.html

(C.3) Additional information
European AI Alliance

The European AI Alliance is a forum set up by the European Commission for a broad and open discussion of all aspects of Artificial Intelligence development and its impacts. Given the scale of the challenge associated with AI, the full mobilisation of a diverse set of participants, including businesses, consumer organisations, trade unions and other representatives of civil society bodies, is essential. The European AI Alliance forms a broad multi-stakeholder platform that complements and supports the work of the AI High-Level Group, in particular in preparing draft AI ethics guidelines and in ensuring the competitiveness of the European Region in the burgeoning field of Artificial Intelligence. The Alliance is open to all stakeholders; it is managed by a secretariat and is already open for registration.

High-Level Expert Group on Artificial Intelligence (AI HLG)

The group has now concluded its work by publishing the following four deliverables:

Deliverable 1: Ethics Guidelines for Trustworthy AI
The document puts forward a human-centric approach to AI and lists seven key requirements that AI systems should meet in order to be trustworthy.

Deliverable 2: Policy and Investment Recommendations for Trustworthy AI
Building on its first deliverable, the HLEG put forward 33 recommendations to guide trustworthy AI towards sustainability, growth, competitiveness, and inclusion. At the same time, the recommendations will empower, benefit, and protect European citizens.

Deliverable 3: Assessment List for Trustworthy AI (ALTAI)
A practical tool that translates the Ethics Guidelines into an accessible and dynamic self-assessment checklist. The checklist can be used by developers and deployers of AI who want to implement the key requirements. This list is available as a prototype web-based tool and in PDF format.

Deliverable 4: Sectoral Considerations on the Policy and Investment Recommendations
The document explores the possible implementation of the HLEG recommendations, previously published, in three specific areas of application: Public Sector, Healthcare and Manufacturing & Internet of Things.

AI Watch

The Commission’s AI Watch prepared a report titled “National Strategies on Artificial Intelligence: A European Perspective”, which was presented during a joint webinar with the OECD. It monitors the AI national strategies of all EU countries, as well as Norway and Switzerland, and this year’s update focuses on areas of cooperation for:

  • strengthening AI education and skills;
  • supporting research and innovation to drive AI developments into successful products and services, improving collaboration and networking;
  • creating a regulatory framework to address the ethics and trustworthiness of AI systems;
  • establishing a cutting-edge data ecosystem and ICT infrastructure.
CAHAI

In September 2019, the Committee of Ministers of the Council of Europe set up an Ad Hoc Committee on Artificial Intelligence (CAHAI). The Committee will examine, on the basis of broad multi-stakeholder consultations, the feasibility and potential elements of a legal framework for the development, design and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law. The Committee, which brings together representatives from the Member States, will exchange views with leading experts on the impact of AI applications on individuals and society, the existing soft-law instruments specifically dealing with AI, and the existing legally binding international frameworks applicable to AI.

AI on Demand Platform

The European Commission has launched a call for proposals to fund a large €20 million project on Artificial Intelligence (AI) under the framework programme on R&D Horizon 2020. It aims to mobilise the AI community in Europe in order to combine efforts, to develop synergies among all the existing initiatives and to optimise Europe’s potential. The call was closed on 17th April 2018, and the received proposals have been evaluated. The awarded project started on 1st January 2019.

Under the next multi-annual budget, the Commission plans to increase its investment in AI further, mainly through two programmes: the research and innovation framework programme (Horizon Europe), and a new programme called Digital Europe.

UNESCO International research centre on Artificial Intelligence (IRCAI)

UNESCO has approved the establishment of IRCAI, which will be seated in Ljubljana (Slovenia). IRCAI aims to provide an open and transparent environment for AI research and debates on AI, providing expert support to stakeholders around the globe in drafting guidelines and action plans for AI. It will bring together stakeholders with a variety of know-how from around the world to address global challenges, support UNESCO in carrying out its studies, and take part in major international AI projects. The centre will advise governments, organisations, legal persons and the public on systemic and strategic solutions for introducing AI in various fields.

AI studies

In addition to the previous initiatives, the Commission is planning to conduct some technical studies about AI. Among them, there will be one specifically targeted to identify safety standardisation needs.

Standard sharing with other domains

AI is a vast scientific and technological domain that overlaps with other domains also discussed in this rolling plan, e.g. big data, e-health, robotics and autonomous systems. Many of the standardisation activities in these domains will be beneficial for AI, and vice versa. For more details, please refer to the corresponding sections.