Putting AI in the critical loop : assured trust and autonomy in human-machine teams /

"Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams" takes on the primary challenges of bidirectional trust and performance of autonomous systems, providing readers with a review of the latest literature, the science of autonomy, and a clear path towards the...

Full description

Saved in:
Bibliographic Details
Editor: Prithviraj Dasgupta
Published: London : Academic Press, [2024]
Literature type: Book
Language: English
Summary: "Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams" takes on the primary challenges of bidirectional trust and performance of autonomous systems, providing readers with a review of the latest literature, the science of autonomy, and a clear path towards the autonomy of human-machine teams and systems. Throughout this book, the intersecting themes of collective intelligence, bidirectional trust, and continual assurance form the challenging and extraordinarily interesting themes which will help lay the groundwork for the audience to not only bridge knowledge gaps, but also to advance this science to develop better solutions. The distinctively different characteristics and features of humans and machines are likely why they have the potential to work well together, overcoming each other's weaknesses through cooperation, synergy, and interdependence which forms a "collective intelligence." Trust is bidirectional and two-sided; humans need to trust AI technology, but future AI technology may also need to trust humans.
Carrier Form: xviii, 286 pages : illustrations ; 24 cm
Bibliography: Includes bibliographical references and index.
ISBN: 9780443159886 (ISBN-13)
0443159882 (ISBN-10)
Index Number: Q334
CLC: B82-057
Call Number: B82-057/P993
Contents: Introduction /
Alternative paths to developing engineering solutions for human-machine teams /
Risk determination vs risk perception: from hate speech, an erroneous drone attack, and military nuclear wastes to human-machine autonomy /
Appropriate context-dependent artificial trust in human-machine teamwork /
Toward a causal modeling approach for trust-based interventions in human-autonomy teams /
Risk management in human-in-the-loop AI-assisted attention aware systems /
Enabling trustworthiness in human-swarm systems through a digital twin /
Building trust with the ethical affordances of education technologies: a sociotechnical systems perspective /
Perceiving a humorous robot as a social partner /
A framework of human factors methods for safe, ethical, and usable artificial intelligence in defense /
A schema for harms-sensitive reasoning, and an approach to populate its ontology by human annotation /