Day 3 of the IGF 2024 was the day of the Dynamic Coalitions. It started with a main session focusing on the Coalitions' contributions to the implementation of the Global Digital Compact (GDC).
As it turned out, 21 of the 31 Dynamic Coalitions were working towards achieving the five main objectives of the GDC:
- Objective 1: Bridging Digital Divides and Accelerating Progress Towards the SDGs
- Objective 2: Expanding Digital Economy Inclusion and Benefits for All
- Objective 3: Fostering a Safe, Secure, and Inclusive Digital Space that Upholds Human Rights
- Objective 4: Advancing Responsible, Equitable, and Interoperable Data Governance
- Objective 5: Enhancing Global AI Governance for Humanity’s Benefit
Obviously, the work of the Dynamic Coalition on Children’s Rights in the Digital Environment (DC CRIDE) fits under Objective 3, Fostering a Safe, Secure, and Inclusive Digital Space that Upholds Human Rights. How the members of the DC are working towards this objective was elaborated on in the session, with a reference to children making up one third of all Internet users worldwide.
Knowing how old users are would therefore seem to make sense not only for the providers of services and platforms but also for digital devices that are connected to the Internet, the so-called Internet of Things (IoT). This was subsequently discussed in the joint workshop of the Dynamic Coalition on IoT and the DC CRIDE, titled "Age-Aware IoT – Better IoT".
Maarten Botterman opened the session with a reference to IoT as a rapidly and continuously emerging set of technologies, which led Jutta Croll to the conclusion that IoT needs to respond to the principle of children’s evolving capacities set out in Art. 5 of the UN-CRC. She then pointed out that respecting children’s rights in the digital environment requires knowledge of their age, but that this need not mean revealing either their identity or their date of birth. Preserving the anonymity of users is an expectation and a requirement that needs to be upheld, especially in the light of current developments such as banning children under the age of 16 from social networks. In this regard, Sonia Livingstone stressed children’s right to access information and to raise their voice; they must not be excluded, she said.

Jonathan Cave addressed the challenge of data minimisation in age assurance mechanisms. Data protection laws like the GDPR set high barriers for the protection of children’s data, while at the same time age verification might lead to more processing of data, albeit with good intentions. He described privacy and safety as hygiene issues on the Internet and pleaded for tech companies to exercise a duty of care in this regard.

When Maarten brought the dynamic labelling and certification schemes being developed at DC IoT into the debate, Abilash Nair raised the question of whether these should be mandated by law. From his point of view, self-regulation has not worked for over three decades, and laws on age limits on the Internet have not been enforced either. Both Sabrina Vorbau and Pratishta Arora referred to the role of parents: they should not become over-dependent on safety tools, either for themselves or for their children, said Pratishta, while Sabrina called for a risk-assessment-based approach to age assurance and referred to the Age Assurance Toolkit developed by European Schoolnet, which is intended to guide platform providers in deploying age assurance where it is necessary.
Concluding the session, participants and speakers agreed on the following takeaways and action points for the future.
Takeaways
1. Need to have a global understanding of good practice, backed up by (global/industry) standards and (regional/national legislation), all informed by global insights
2. Capacity development for all stakeholders to (1) be aware; (2) be able to express; (3) be able to act
3. Accountability is key: who is accountable for what
Action points
1. Involve governments in the design, development, and backing of good practice standards regarding children and their use of IoT (including child rights impact assessments)
2. Identify good practice examples and approaches from around the world (including privacy preserving age verification methods) and consider shared lessons regarding children’s rights
The afternoon workshop Regulating AI and Emerging Risks for Children's Rights, organised by the 5Rights Foundation, discussed the need to regulate artificial intelligence and the rationale for doing so from a children's rights perspective. On this point, Nidhi Ramesh, Youth Ambassador of the Foundation, presented the views of young people. She pointed out that artificial intelligence is used in almost all products and services that children and young people interact with on a daily basis. Nevertheless, many people are not aware that they are interacting with AI and are unable to understand or explain how these technologies work and influence their lives. Understanding how decisions are made is essential to being able to evaluate them. She also noted that even beneficial offerings, such as learning or cultural resources, can have negative effects; she was concerned that AI could change and affect young people's creativity and knowledge acquisition. Last but not least, she called for the well-being of young people to be central to the protection of personal data and privacy.
This perspective was also taken up by Jun Zhao and Ansgar Koene in their contributions. Jun Zhao emphasised that young people already use artificial intelligence twice as often as adults. Their data is collected and analysed through, among other things, toys that are connected to the digital environment. This example underlines the observation that children often do not know if and when they are interacting with artificial intelligence, and it gives further reason to protect their data. Furthermore, studies show that artificial intelligence can also lead to unjustified decisions and recommendations. On the one hand, this conflicts with the child's right to equal treatment; on the other hand, it can pose risks to the development of children and young people if AI suggestions are not beneficial or are even harmful to them. This makes it all the more important to develop and implement standards and regulations for the development and use of artificial intelligence. Following the adoption of the European Union's AI Act, practical guidelines for implementation now need to be put in place. These should give developers and providers of AI clear guidance and orientation on how children's rights can be taken into account in the processes for developing the technologies. In this regard, Beeban Kidron announced that the 5Rights Foundation is working on such guidelines and will present an AI code for children in a few months' time, which will pursue this very goal and take children's rights into account as early as the design process (child rights by design).