Adversarial AI is a term from the fields of artificial intelligence, cybercrime and cybersecurity, and digital transformation. It describes the use of artificial intelligence to trick, outsmart, or attack other AI systems.
Imagine a company using an AI to filter out fraudulent emails. Criminals can use adversarial AI to modify emails in a targeted way so that this filter AI no longer recognises them. The malicious emails then reach the recipient unnoticed.
Adversarial AI therefore works like a digital trickster: it identifies weaknesses in the target AI and exploits them deliberately. This can cause problems in many areas - for example in autonomous driving, when traffic signs are manipulated so that a vehicle's AI misreads them.
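The email-filter example above can be sketched in a few lines. This is a minimal illustration, not a real attack: it assumes a toy spam filter implemented as logistic regression over three made-up numeric features, and uses a gradient-sign step (in the spirit of the fast gradient sign method) to nudge those features until the filter's verdict flips. All weights and feature values here are invented for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical filter model: positive weights mark "spammy" features
# (e.g. suspicious-word count, link count, personal-greeting score).
w = np.array([2.0, 1.5, -0.5])
b = -1.0

def spam_score(x):
    """Probability that the filter labels the email as spam."""
    return sigmoid(w @ x + b)

# Original malicious email: clearly flagged as spam (score > 0.5).
x = np.array([1.0, 1.0, 0.0])

# Evasion step: move each feature against the gradient of the spam
# score, then clip to keep feature values non-negative and plausible.
eps = 1.2
s = spam_score(x)
grad = s * (1 - s) * w            # d(score)/dx for the logistic model
x_adv = np.clip(x - eps * np.sign(grad), 0.0, None)

print(spam_score(x) > 0.5)       # original email is caught
print(spam_score(x_adv) > 0.5)   # modified email slips past the filter
```

The perturbed email still carries its malicious payload; only the features the filter looks at have been shifted just enough to cross the decision boundary. This is exactly the weakness-exploiting behaviour described above.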
That is why it is important for companies today to secure their systems not only against classic hackers but also against attacks using adversarial AI. Only in this way can the security of artificial intelligence be guaranteed in the long term and user trust be maintained.













