Professionals’ Perception and Trust in AI Predictions in High-Risk Contexts
DOI: https://doi.org/10.47363/JAICC/ICADCCS2026/2026(5)9

Keywords: AI Trust Calibration, High-Stakes Decision-Making, Uncertainty Communication, Human-AI Collaboration, Explainable AI

Abstract
This presentation examines how professionals perceive, calibrate, and operationalize trust in AI-driven predictions when decisions carry high stakes and low tolerance for error. We explore the psychological and organizational drivers of trust (such as perceived competence, transparency, accountability, and prior experience) as well as the factors that undermine it, including model opacity, miscommunicated uncertainty, and automation bias. Building on real-world high-risk scenarios (e.g., healthcare, critical infrastructure, and safety-relevant operations), we discuss how explanation quality, confidence/uncertainty reporting, and human-in-the-loop workflows affect decision quality and responsibility allocation. The talk proposes practical design and governance recommendations to support appropriate reliance, including uncertainty-aware interfaces, auditability, training for judgment calibration, and socio-technical safeguards that align AI outputs with professional standards and regulatory constraints. The goal is to move beyond “trust vs. distrust” toward measurable, context-sensitive trust calibration that improves outcomes without eroding accountability.
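As one illustration of what “measurable trust calibration” can mean in practice, a common proxy is expected calibration error (ECE), which compares a model’s reported confidence with its empirical accuracy. The sketch below is not drawn from the presentation itself; the function name and example data are hypothetical, and it assumes binary correctness labels with confidences in [0, 1].

```python
# Minimal sketch: expected calibration error (ECE) as one measurable proxy
# for trust calibration. Illustrative only; not the authors' method.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by reported confidence, compare each bin's mean
    confidence with its empirical accuracy, and return the weighted mean
    absolute gap (lower = better calibrated)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
        ece += in_bin.mean() * gap  # weight gap by fraction of samples in bin
    return ece

# Example: a model that reports 0.9 confidence but is right only 60% of the
# time is over-confident, and the gap shows up directly in the ECE.
conf = [0.9, 0.9, 0.9, 0.9, 0.9, 0.6, 0.6, 0.6, 0.6, 0.6]
hit  = [1,   1,   1,   0,   0,   1,   1,   0,   1,   0]
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```

Metrics of this kind give an operational handle on over- and under-reliance: an uncertainty-aware interface can surface per-bin gaps rather than a single “trust score,” which fits the context-sensitive framing above.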
License
Copyright (c) 2026 Journal of Artificial Intelligence & Cloud Computing

This work is licensed under a Creative Commons Attribution 4.0 International License.