Caution or Trust in AI? How to design XAI in sensitive Use Cases?

Sunday, 9 July 2023

New conference paper by Dr. Jürgen Fleiß, Univ.-Prof. Dr. Stefan Thalmann et al.

 


Artificial Intelligence (AI) is becoming increasingly common, but adoption in sensitive use cases lags because the black-box character of AI hinders auditing and trust-building. Explainable AI (XAI) promises to make AI transparent, allowing for auditing and increasing user trust. However, in sensitive use cases the goal is not to maximize trust, but to balance caution and trust and find an appropriate level of trust. Studies on user perception of XAI in professional contexts, and especially in sensitive use cases, are scarce. We present the results of a case study involving domain experts as users of a prototype XAI-based IS for decision support in quality assurance in pharmaceutical manufacturing. We find that for this sensitive use case, simply delivering an explanation falls short if it does not match experts' beliefs about which information is critical for a given decision. Unsuitable explanations override all other quality criteria; suitable explanations can, together with other quality criteria, lead to an appropriate balance of trust in and caution toward the system. Based on our case study, we discuss design options in this regard.

Kloker, A., Fleiß, J., Koeth, C., Kloiber, T., Ratheiser, P. and Thalmann, S. (2022): Caution or Trust in AI? How to design XAI in sensitive Use Cases?, in: Americas Conference on Information Systems (AMCIS) Proceedings, Vol. 16, Minneapolis, August 10-14, 2022, pp. 1-10, Online: https://aisel.aisnet.org/amcis2022/sig_dsa/sig_dsa/16/.

 

Further publications can be found here.
