Human-centered AI
Human-centered AI refers to an initiative at the intersection of the fields of artificial intelligence and human-computer interaction (HCI) to develop artificial intelligence systems in a way that prioritizes human values, needs, and general flourishing[1]. Emphasis is placed on the recognition that artificial intelligence systems are rapidly changing, and will continue to influence, many aspects of human experience, in areas including scientific inquiry, governance and policy, labor and the economy, and creative expression. The aim is to adapt current developments and guide future ones along a trajectory that is most beneficial to the human population at large, augmenting human intelligence and capacities across these areas rather than replacing them. Particular attention is paid to mitigating the negative effects of AI automation on the livelihoods of the labor force, to the use of AI in healthcare, and to imbuing AI systems with societal values. Human-centered AI is linked to related endeavors in AI alignment and AI safety, but while those fields primarily focus on mitigating risks posed by AI that is unaligned with human values or that develops beyond human control, human-centered AI places significant focus on exploring how AI systems can augment human capacities and serve as collaborators.
Conceptual History
The importance of aligning artificial intelligence development with human values in some sense predates artificial intelligence itself. Before the modern conception of artificial intelligence was coined at the 1956 Dartmouth Workshop, the idea of robots as constructed, autonomous agents had entered the cultural consciousness as early as the 1920s, with Karel Čapek's Rossum's Universal Robots[2]. The imagined problems of robots whose aims and values require intentional alignment with those of humans followed soon after, most widely known from science fiction author Isaac Asimov's Three Laws of Robotics, dating to his 1942 short story "Runaround". Two of the three eponymous laws are directly concerned with robots' interaction with, and deference toward, humans, and have in recent times been reexamined in the face of modern AI[3]. In 1985, after artificial intelligence research had taken off and its effects were more acutely conceptualized, Asimov added a Zeroth Law, which treats robots' relationship with humanity as a whole as distinct from their relationship with individual humans[4]. While modern artificial intelligence is largely distinct from robotics, the conceptualization of both robots and AI systems as autonomous agents positions this as a foundation for conceptions of human-centered AI.
Aside from robots, artificially intelligent autonomous agents interacting with humans have been conceived of since at least 1950, when Alan Turing published his famous "imitation game"[5], now commonly called the Turing test, a thought experiment that uses human-machine interaction to assess the intelligence of a system. More recently, artificial intelligence researchers such as Stanford's Erik Brynjolfsson have warned that prioritizing human-like AI that substitutes for, rather than complements, human labor could lead to a so-called "Turing Trap"[6].
Augmentation & Automation
A major stated aim of human-centered AI is to promote the development of AI in ways that augment human capabilities rather than replace them. To this end, organizations and initiatives that take a human-centered approach to AI development focus on frameworks that encourage collaboration between humans and artificial intelligence systems, rather than attempting to automate tasks currently handled by humans. Such avenues range from data visualization for big data, which helps human engineers understand extremely large datasets and design better machine learning models to handle them[7], to AI-powered sensors that monitor vital signs and allow healthcare providers to respond more quickly[8].
Many human-centered AI initiatives position the approach as an alternative to the apparent mainstream in AI development, which is primarily concerned with automation. Driven by the pressures of the market economy, AI development that replaces tasks currently performed by humans with automated processes is incentivized because it allows for greater profit margins; this often comes to the detriment of the workers whose labor is displaced, leading to an environment in which human workers are outcompeted by AI systems across various service-sector and technology-based industries. At the same time, automation and augmentation are not always incompatible: a major aim of human-centered AI is the automation of rote tasks that would otherwise hinder a person's productivity or creativity, freeing them to direct their energy and intelligence toward higher-level tasks and thus achieving augmentation through automation[9].
Research
Much of the work on human-centered AI comes from research institutes within universities, at companies, and as freestanding organizations. The Stanford Institute for Human-Centered AI (HAI) is one such group, engaging academics, industry professionals, and policymakers based at Stanford University to conduct research and inform policy in various areas of human-centered AI, including the nature of the intelligence itself, augmentation, and the measurement of the impacts of AI systems on sociopolitical and cultural institutions. Similar groups exist at other universities, including the Chicago Human + AI (CHAI) Lab at the University of Chicago and the Human-Centered AI (HAI) Lab at the University of Oxford. Outside academia, companies such as IBM have research initiatives dedicated to advancements in human-centered AI.
References
[edit]- ^ "What Is Human-Centered AI (HCAI)? — updated 2025". The Interaction Design Foundation. Retrieved 2025-05-28.
- ^ "The Project Gutenberg eBook of R. U. R. (Rossum's Universal Robots), by Karel Capek". www.gutenberg.org. Retrieved 2025-05-30.
- ^ "Asimov's Three Laws of Robotics, Applied to AI | Psychology Today". www.psychologytoday.com. Retrieved 2025-05-30.
- ^ Eadline, Doug (2024-09-14). "The Three Laws of Robotics and the Future". HPCwire. Retrieved 2025-05-30.
- ^ Turing, Alan (1950). "Computing Machinery and Intelligence". Mind. 59 (236): 433–60. doi:10.1093/mind/lix.236.433.
- ^ "The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence". Stanford Digital Economy Lab. Retrieved 2025-05-30.
- ^ "What is human-centered AI?". IBM Research. 2021-02-09. Retrieved 2025-05-28.
- ^ Wang, Chan; He, Tianyiyi; Zhou, Hong; Zhang, Zixuan; Lee, Chengkuo (2023-08-02). "Artificial intelligence enhanced sensors - enabling technologies to next-generation healthcare and biomedical platform". Bioelectronic Medicine. 9 (1): 17. doi:10.1186/s42234-023-00118-1. ISSN 2332-8886. PMC 10394931. PMID 37528436.
- ^ "A Human-Centered Approach to the AI Revolution | Stanford HAI". hai.stanford.edu. Retrieved 2025-05-28.