On the second day of the European Dialogue on Internet Governance (EuroDIG) conference in Strasbourg, France, the focus was on the links between artificial intelligence and discrimination, and the impact of neurotechnologies on privacy.
The workshop 'AI & non-discrimination in digital spaces: from prevention to redress' examined the discriminatory outcomes and effects of artificial intelligence and asked how these technologies could instead promote the right to equal treatment. Participants emphasised that it is often very difficult to determine whether an AI system produces discriminatory results intentionally, for example through its programming or design, or unintentionally. A lack of transparency and of knowledge about how these systems work makes this even harder to establish. It was therefore agreed that providers and developers of AI should be encouraged to disclose more about their programming and processes, and that they should be held responsible for the results. The European Union's AI Act accordingly stipulates high safety standards, particularly for applications that can significantly affect people and their rights. However, whether and how programmers can foresee such effects was a point of debate. To ensure this in future, participants advocated a binding human rights impact assessment and called for human oversight. On the initial question of how these technologies can promote the right to equal treatment, there was broad consensus that artificial intelligence cannot prevent discrimination, because it is built on perspectives and mechanisms that perpetuate it. AI could, however, help to recognise discrimination and thus contribute to preventing it.
In the plenary hall of the Council of Europe, the session 'Neurotechnology and privacy: Navigating Human Rights and Regulatory Challenges in the Age of Neural Data' discussed whether developments in neurotechnologies call for a new category of protected data. Ana Brian Nougrères, the UN Special Rapporteur on the right to privacy, argued for a new classification of data that is particularly worthy of protection. Neurotechnological systems breach the boundary of privacy and thus provide access to data from which conclusions can be drawn about our thoughts, feelings, and identity. To preserve the integrity of the human mind, the highest standards of protection must therefore be applied to this data. Damian Eke, founder of the African Data Governance Initiative, and Petra Zandonella, a researcher at the University of Graz, however, spoke out against creating a new data protection category. Despite the special significance and value of this data, they argued, no new classification should be developed; rather, existing rights to the protection of personal data should be interpreted broadly so as to afford comprehensive safeguards. This position is also supported by the lack of consensus within the scientific community on whether, and if so how, neurological, cognitive, biometric, and genetic data should be distinguished from one another. It may therefore be more useful to subsume these types of data under the existing classification of health data.
The concept of personal integrity, enshrined in Germany's Youth Protection Act in 2021, was also brought into the debate. While that concept aims to prevent people from being misled against their own convictions on the basis of data that has already been collected and evaluated, the aim with regard to neurological data is to prevent manipulation based on information that has not yet taken shape in a person's mind and that they have not necessarily perceived. Whether this will lead to the concept of personal integrity being developed further or being replaced remains to be clarified.