The term "accountability chains in AI systems" (also: responsibility chains) belongs to the categories Artificial Intelligence, Digital Society, and Cybercrime & Cybersecurity.
Accountability chains in AI systems describe who is responsible for the development, the deployment, and any errors or damage an artificial intelligence may cause. Because AI systems are often highly complex, many different people and companies are typically involved: from the developers to the operators and users, and on to those who make decisions based on the AI's output.
A concrete example: a company uses AI software to pre-screen job applications. If an applicant is unfairly rejected because of discrimination, the question arises who bears responsibility: the software manufacturer, the company deploying the AI, or the person who made the final decision? The accountability chain helps clarify such questions and ensures that everyone involved knows their role.
That is why it is important, from the outset of developing and deploying AI systems, to define clearly who is responsible for what. This makes risks easier to manage and strengthens trust in AI.
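To make this concrete, the following is a minimal sketch, assuming a deploying company wants to record the accountability chain alongside each AI-assisted decision. All class, role, and party names (DecisionRecord, AccountabilityEntry, "SoftwareVendor Ltd.", etc.) are hypothetical illustrations, not an established standard or library.

```python
# Illustrative sketch only: one possible way to record an accountability chain
# next to an AI-assisted decision. All names and roles here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AccountabilityEntry:
    """One link in the chain: who is responsible, in what role, for what."""
    party: str           # e.g. the vendor, the deploying company, an individual
    role: str            # e.g. "developer", "operator", "decision-maker"
    responsibility: str  # what this party answers for


@dataclass
class DecisionRecord:
    """An AI-assisted decision plus the chain of responsible parties."""
    decision: str
    model_version: str
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    chain: list = field(default_factory=list)


# Example based on the application-screening scenario above.
record = DecisionRecord(
    decision="application rejected at pre-screening",
    model_version="screening-model v2.1",
    chain=[
        AccountabilityEntry("SoftwareVendor Ltd.", "developer",
                            "model design, training data, bias testing"),
        AccountabilityEntry("Hiring Corp.", "operator",
                            "lawful deployment, monitoring, appeals process"),
        AccountabilityEntry("HR reviewer", "decision-maker",
                            "final review of the automated recommendation"),
    ],
)

for entry in record.chain:
    print(f"{entry.role}: {entry.party} -> {entry.responsibility}")
```

Such a record does not settle legal liability by itself, but it makes the chain explicit and auditable: for every decision it is documented which party held which role and what they were responsible for.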