The article discusses the importance of combining meaningful human control with "human-in-the-loop" approaches in the development and operation of artificial intelligence (AI) systems. The authors argue that relying solely on automation or solely on human decision-making is insufficient, as either alone can lead to unethical or unreliable outcomes. To address this challenge, researchers are exploring several approaches: developing standardized evaluation methods, improving communication between humans and AI agents through natural language interfaces, and incorporating "meaningful human control" into AI system design.
The article highlights the need for a comprehensive understanding of the human's role in the loop when evaluating human-machine teaming. Evaluation metrics should capture not only task performance but also factors such as trust, situation awareness, workload, and cognitive load. The authors note that natural language interfaces can facilitate communication and collaboration between humans and AI agents, making systems more trustworthy and transparent.
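To make this concrete, the sketch below shows one hypothetical way such multi-dimensional evaluation data might be recorded and aggregated. The field names, normalization, and weighting are illustrative assumptions, not measures proposed by the article.

```python
from dataclasses import dataclass

@dataclass
class TeamingEvaluation:
    """One evaluation record for a human-machine teaming trial.

    All fields are assumed normalized to [0, 1]; higher is better
    except workload and cognitive_load, where lower is better.
    """
    task_performance: float    # e.g., fraction of tasks completed correctly
    trust: float               # e.g., from a post-trial trust questionnaire
    situation_awareness: float
    workload: float            # e.g., a normalized subjective workload score
    cognitive_load: float

    def composite_score(self) -> float:
        """Naive equally weighted aggregate; a real study would
        validate the weights and keep dimensions separate."""
        positives = (self.task_performance + self.trust
                     + self.situation_awareness) / 3
        penalties = (self.workload + self.cognitive_load) / 2
        return positives - 0.5 * penalties

# Example: a trial with strong performance but high workload.
trial = TeamingEvaluation(task_performance=0.9, trust=0.7,
                          situation_awareness=0.8,
                          workload=0.75, cognitive_load=0.6)
print(f"composite score: {trial.composite_score():.2f}")
```

In practice, a study would report the dimensions separately rather than collapsing them into one score, since trade-offs (e.g., high performance achieved at the cost of high workload) are exactly what human-machine teaming evaluations need to expose.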
The article concludes by acknowledging the challenges of incorporating moral responsibility and socio-technical factors into AI system operation. To address these challenges, the authors propose four additional properties that refine the original definition of meaningful human control: an explicit moral operational design domain, appropriate and mutually compatible representations, control ability and authority, and an explicit linkage between AI and human actions.
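As a rough illustration of how these four properties could be operationalized during development, the sketch below encodes them as a design-review checklist. The class name, field names, and pass/fail framing are hypothetical; the article defines the properties conceptually, not as code.

```python
from dataclasses import dataclass

@dataclass
class MeaningfulHumanControlReview:
    """Checklist for the four proposed properties of meaningful
    human control, recorded during an AI system design review."""
    explicit_moral_odd: bool              # moral operational design domain is documented
    compatible_representations: bool      # human and AI representations are mutually compatible
    control_ability_and_authority: bool   # humans both can and may intervene
    explicit_ai_human_linkage: bool       # AI actions are traceably linked to human actions

    def unmet_properties(self) -> list[str]:
        """Return the names of properties not yet satisfied."""
        return [name for name, met in vars(self).items() if not met]

# Example review: three properties satisfied, one outstanding.
review = MeaningfulHumanControlReview(
    explicit_moral_odd=True,
    compatible_representations=True,
    control_ability_and_authority=False,
    explicit_ai_human_linkage=True,
)
print("unmet:", review.unmet_properties())  # unmet: ['control_ability_and_authority']
```

A binary checklist like this is only a starting point; each property would in practice need its own evidence and evaluation criteria rather than a single yes/no judgment.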
In summary, the article emphasizes the importance of considering human values and ethical behavior in the development and operation of AI systems. By combining meaningful human control with human-in-the-loop approaches, researchers aim to create more reliable and trustworthy AI systems for applications ranging from healthcare to transportation.