Shaping digital responsibility, and thus also the question of how to deal responsibly with AI, is a task for society as a whole. The use of AI offers opportunities but also poses challenges, especially reputational and liability risks. HT d.d. is aware of the benefits that AI offers, but also of the challenges that AI, like any new technology, brings. With the adoption of the digital ethics guidelines on AI in DT, which establish the principles and the way AI is to be used, there is a need to identify and assess behaviors that are contrary to these guidelines in order to prevent their negative effects on the company and its employees. For this reason, "Disregard of Ethical Guidelines in AI Projects" was introduced as a new risk in the compliance risk assessment.
Examples of situations that may give rise to the risk "Disregard of Ethical Guidelines in AI Projects" and that are listed in the compliance risk assessment:
- A Telekom unit or a sales partner of a Telekom unit uses an AI-powered bot for social media posts. After a short time, this bot posts racist content and/or fake news.
- A (purchased) AI system used in a project passes customer data on to third parties.
- AI collects and combines data to draw conclusions about individuals far beyond their consent and knowledge (e.g., discriminatory AI-supported decisions regarding hiring and contract offers).
- An entity uses a chatbot (text or voice) to interact with customers. However, the technical solution does not identify itself as a bot and leads the customer to believe they are interacting with a person.
- An AI discriminates against certain groups of people based on gender, age, etc. (e.g., in terms of hiring and contract offers).
Examples of red flag questions that can be used to determine whether the risk exists:
- Are there projects in your company in which AI is used and/or developed?
- Could people be seriously harmed by the use of your AI (physically, materially, or immaterially, the latter e.g. through damage to reputation)?
- Can responsible persons be reached at any time who can take the necessary actions (emergency stop, workaround) to limit the extent of damage in the event of a major AI malfunction?
- Is AI purchased from or developed in non-EU countries? (The background to this is verifying that the purchased technology complies with data protection regulations, in particular the GDPR. A problem can be, for example, data protection standards in non-EU countries that deviate from EU requirements.)
Given the growing prevalence of AI in projects, behaviors contrary to the digital ethics guidelines on AI pose a business risk; disregard of such guidelines was therefore introduced as a new compliance risk in the compliance risk assessment. Compliance risk assessment is a key step in implementing an effective, proactive and sustainable compliance program. The results of this and previous risk assessments help us to develop appropriate measures to ensure compliance with the digital ethics guidelines on AI and, in the long run, to raise awareness of the need to adhere to such guidelines as well as to identify areas for improvement. All of this has a positive impact on the business, reduces the negative effects that disregard of ethical guidelines can have on AI projects, products and services, and further strengthens the trust and satisfaction of customers, employees and other stakeholders in our company.