Code of Conduct (Ethical principles)

The Code of Conduct proposed under the ETAPAS solutions is intended to give public organisations a starting point for developing their own, personalised CoC. It encompasses ten principles that serve both as affirmations of values and as accountability mechanisms for the organisation.

The public sector’s use of disruptive technologies must conform to the principles of environmental sustainability. The public sector has a responsibility to ensure that the use of these technologies contributes to, or at the very least does not impede, European and national policies to achieve climate neutrality. The storage of data and the running of computer programs should be adjusted to minimize energy consumption and support climate neutrality. Environmental sustainability also places strict demands on the production processes for the hardware and the materials used to produce new technology. New materials, such as nanomaterials, often have toxic and ecotoxic properties that need to be carefully evaluated before any use that can affect the environment. Biodiversity is a precondition for ecological resilience, and it must always have an important role in the evaluation of the environmental effects of new technologies. Production processes must be assessed globally: the public sector should take the lead in ensuring that when we import technological products, we do not thereby export environmental problems. New technologies should be used in ways that respect animal rights and improve rather than worsen animal welfare.

Fairness, equal treatment and the rule of law are fundamental values from which public administration should never deviate. This includes equality between women and men as well as equality between people with different ethnic, religious, sexual or other identities or identifications. New technologies can be harnessed to implement these values and to uphold the basic principle that like cases should be treated alike. The introduction of technologies that promote equality of access and opportunity should be strongly prioritized. But technologies can also, inadvertently, have the opposite effect. For instance, algorithms can become biased against minorities or against women if they are trained on data that reflect previous bias and discrimination. This can happen even if direct use of information about group membership is excluded from the algorithm’s input; for instance, home addresses can serve as a proxy for other characteristics that lead to bias. Careful planning is needed to avoid the introduction of bias as an unintended consequence of a new technology. Training data and other inputs that contribute to the shaping of artificial intelligence, algorithmic decisions or machine learning must be carefully selected and evaluated to make sure that discrimination or other undesired effects are not introduced. Automated decision-making that affects individuals should only be used when there is reason to be confident that the algorithm does not discriminate against any group of individuals. New technologies should be implemented in an inclusive way, and the creation of new inequalities between groups with different degrees of access (“digital divides”) must be avoided. Particular emphasis should be put on the rights of children and on children’s welfare.
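The proxy mechanism described above can be made concrete with a small sketch. The data, decision rule and group labels below are entirely hypothetical: they only illustrate how a rule that never sees a protected attribute can still produce unequal outcomes through a correlated feature such as a postcode, and how a simple audit of per-group outcome rates can reveal this.

```python
# Hypothetical illustration of bias via a proxy variable.
# The decision rule sees only the postcode, never the group label,
# yet the audit below shows a clear disparity between groups.
from collections import defaultdict

# Synthetic records: (postcode, group, qualified). Group membership is
# correlated with postcode but is NOT an input to the decision rule.
records = [
    ("A", "majority", True), ("A", "majority", True),
    ("A", "majority", False), ("A", "minority", True),
    ("B", "minority", True), ("B", "minority", True),
    ("B", "minority", False), ("B", "majority", True),
]

def decide(postcode: str) -> bool:
    # A rule trained on historically biased data might simply favour
    # applicants from postcode "A".
    return postcode == "A"

# Audit: approval rate per (hidden) group reveals the disparate impact.
approved, total = defaultdict(int), defaultdict(int)
for postcode, group, _qualified in records:
    total[group] += 1
    approved[group] += decide(postcode)

rates = {g: approved[g] / total[g] for g in total}
print(rates)  # the majority group is approved three times as often
```

Audits of this kind require access to the protected attribute for evaluation purposes, which is one reason why simply deleting such attributes from the data does not, by itself, ensure fairness.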

All public decision-making should follow the principle of inclusivity for all members of society. Individuals have the right to know the grounds of decisions affecting them. In many cases they also have the right to appeal to a decision-maker at a higher level, and that right can only be effectively exercised if the grounds for the original decision are available. Individuals who are affected by a public decision based on automated data processing should have access to clear information, understandable to a layperson, both on how the decision was made (transparency) and on its justification (explainability). Information must also be presented in ways that are accessible to disabled and elderly people. Particular emphasis should be put on access and explainability for people with cognitive difficulties. When children are affected, they should receive information in a form suitable for their age. Public decisions affecting an individual should be based on criteria that are relevant to the decision. Therefore, the use of erratic and unpredictable decision support systems is not acceptable.
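One way to support both transparency and explainability is to have an automated decision record the criteria it applied, so that plain-language grounds can be handed to the affected individual on request. The sketch below is a minimal, hypothetical illustration; the benefit name, thresholds and field names are invented for the example and do not reflect any real policy.

```python
# Hypothetical sketch: an automated decision that records the grounds it
# used, so a layperson-readable justification can be produced on request.
from dataclasses import dataclass, field

@dataclass
class Decision:
    granted: bool
    reasons: list = field(default_factory=list)

def assess_benefit(income: int, household_size: int) -> Decision:
    # Thresholds are illustrative assumptions, not real policy values.
    limit = 20000 + 5000 * household_size
    reasons = [
        f"Income limit for a household of {household_size}: {limit}",
        f"Declared income: {income}",
    ]
    granted = income <= limit
    reasons.append("Income is within the limit." if granted
                   else "Income exceeds the limit.")
    return Decision(granted, reasons)

d = assess_benefit(income=27000, household_size=1)
print(d.granted)          # the outcome
for r in d.reasons:       # the plain-language grounds for the decision
    print("-", r)
```

A deterministic rule of this kind also satisfies the requirement above that decision support systems must not be erratic or unpredictable: the same inputs always yield the same outcome and the same stated grounds.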

The public sector is subject to strict principles of accountability. Ultimately, elected representatives are accountable to the citizens for the activities of public authorities and other entities under their direction. This means that blaming a machine, an algorithm or a decision support system is a particularly poor option in the public sector. In the end, humans will be held responsible for decisions that have effects on individuals or society. There must therefore always be sufficient human oversight and control of automated decision-making to ensure that human decision-makers can be held accountable. Public servants who oversee work that involves disruptive technologies must have the requisite time and resources to actually fulfil the responsibilities assigned to them. Adequate procedures must be in place to ensure that individuals who have questions or complaints about a decision can communicate with a responsible person. Public sector organizations should also have procedures for investigating and taking measures against problems arising in their use of disruptive technologies.

Risks of both intentional and unintentional harm should be carefully evaluated before the introduction of disruptive technologies, and high priority should be assigned to the safety and security of all who are affected. For instance, automation can be harmful to both physical and mental health, and both technological and organizational measures are needed to prevent this from happening. Some disruptive technologies give rise to specific security risks that must be attended to. Strong protection against hacking and other forms of adversarial intrusion and attack must be implemented. It is also imperative to prevent the manipulation and misuse of data. When data are accepted from sources that cannot be credibly authenticated, the ways in which the data could have been manipulated must be carefully investigated. Whenever possible, only authenticated data should be used. This is necessary to uphold the integrity of public decision-making.
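As one common technique for the data-authentication requirement above, a keyed message authentication code (HMAC) lets a receiving authority verify that a record was produced by a trusted source and has not been altered in transit. This is a minimal sketch using the Python standard library; the shared key and the record contents are invented for the example, and in practice key management would be handled by dedicated infrastructure.

```python
# Hypothetical sketch: authenticating incoming records with an HMAC so
# that manipulated data can be rejected before entering decision-making.
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # assumption: exchanged out of band, kept secret

def sign(payload: bytes) -> str:
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # compare_digest prevents timing attacks on the comparison
    return hmac.compare_digest(sign(payload), signature)

record = b'{"case_id": 42, "service": "housing"}'  # illustrative record
tag = sign(record)
assert verify(record, tag)             # authentic data is accepted
assert not verify(record + b"x", tag)  # manipulated data is rejected
```

An HMAC only proves origin and integrity relative to the shared key; it does not address whether the source itself recorded the data correctly, which is why the investigation of possible manipulation upstream remains necessary.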

Public administrations have access to large quantities of data, collected for various purposes. The use of such data can have unintended harmful consequences for individuals, not least when information from many sources is combined. The protection ensured by the GDPR should be complemented with additional regulation whenever needed. Data should be anonymized as effectively as possible, and additional effective measures to prevent and detect unauthorized use should be implemented. In particular, the combination of data from different sources should be subject to strict regulation, and protective measures such as informed consent and anonymization should be implemented to avoid misuse. Best-practice methods for data protection, including deletion mechanisms, should be used and regularly updated. The collection and use of new types of sensitive personal data, such as biological data and data from facial recognition technology, should only be considered after broad public consultations.
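A basic building block for the protective measures mentioned above is pseudonymization: replacing direct identifiers with derived tokens so that records can still be linked within one project while the raw identifier never appears in the analytical dataset. The sketch below uses a secret salt with a cryptographic hash; the identifier format is invented for the example, and real deployments would add key management, rotation and deletion mechanisms.

```python
# Hypothetical sketch: salted-hash pseudonymization. The salt is kept
# secret per project; without it, low-entropy identifiers could be
# recovered by brute force, so secrecy of the salt is essential.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # assumption: secret, stored separately

def pseudonymize(identifier: str) -> str:
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()

# The same person maps to the same pseudonym within one project,
# so records from different sources can still be linked...
assert pseudonymize("19800101-1234") == pseudonymize("19800101-1234")
# ...but the raw identifier itself never enters the analytical dataset.
assert "19800101-1234" not in pseudonymize("19800101-1234")
```

Using a different salt per project also limits the risk highlighted above of combining data from different sources: pseudonyms generated under different salts cannot be joined directly.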

Employees are the most important asset of public administration. Their participation in the creation of an ethical culture at the workplace is essential for the functionality and credibility of the public sector. New technologies should not be used to introduce more invasive forms of control of workplace behaviour, since this can infringe on privacy and be dehumanizing, demoralizing, and destructive to the mental and physical health of workers. Instead, new technologies can and should be used to relieve public servants of routine work and make better use of their competences. If a new technology has an impact on employees and their working conditions, then its introduction must be decided in a participative and co-creative process involving them and their organizations. Public servants should be offered the education and training needed to improve their skills in the ethical use and management of new technologies, with the twofold objective of upskilling and reskilling. They should also be made aware of their rights in relation to their own data, whenever these may be affected by their employer’s use of disruptive technologies.

Ensuring the wellbeing of all residents must be a priority of the public sector, also in the adoption of new technologies. A common concern with new technologies is that reduced human involvement can erode the social contacts that are crucial for the functioning and the social cohesion of our societies. Contacts with providers of public services, especially social and health-related services, can be important and often indispensable parts of a person’s social network. New technologies can and should be used to restructure the provision of public services in ways that improve their contacts with residents. New technologies should not be used in ways that weaken social networks and make residents more isolated. The introduction of alternative human-based modes of contact can in some cases be useful to avoid such negative effects.

Public-private cooperation is needed to solve many problems in the public sector, not least the development and adaptation of new technologies. The public sector should have sufficient competences of its own, and the ability to control and review private sector involvement, to ensure that such cooperation works effectively in the public interest. All public-private cooperation should comply with the ethical principles adopted for the public sector. Private sector performance and accountability should be ensured through appropriate contractual protections. All procurement of privately owned technology should be based on a competent assessment of the technology, including its ethical aspects.

In its supervision of private sector activities, the public sector should employ adequate technology, for instance for the automated analysis of large amounts of data, in ways that prevent infringements on privacy. AI-based decision support systems in the private sector that are subject to public supervision must be sufficiently transparent to ensure that effective supervision can be performed.

The social effects of disruptive technologies are difficult, often impossible, to foresee. Due to this uncertainty, the full-scale introduction of such technologies should whenever possible be preceded by carefully evaluated trials performed under realistic conditions. Subsequently, when the technology is introduced on a larger scale, evaluation should continue in order to detect positive and negative effects that may not have been discovered in the small-scale trials. These evaluations should consider all the aforementioned principles. Further, they should include the perspectives of residents and employees, particularly groups at risk of exclusion or discrimination as a result of these technologies. Research on the social effects of new technologies should also be promoted. Adjustments, improvements and, when necessary, replacement of technologies should be made whenever needed.