OpenAI CEO Sam Altman, a prominent figure in the AI landscape, has recently stirred quite a bit of controversy with his perspective on AI’s future. Altman’s remarks about the “median human” and the potential of AI have sparked intense discussions and raised important questions about the direction AI is heading.
Sam Altman’s Vision
Sam Altman’s vision for artificial general intelligence (AGI) is a significant departure from the norm. He envisions AGI as having the intelligence level of a “median human that you could hire as a co-worker.” This vision raises eyebrows because it suggests that AGI could potentially replace the work of average individuals across various professions.
This perspective isn’t entirely new; Altman previously expressed similar ideas during a 2022 interview on the Lex Fridman podcast. In that conversation, he proposed that AGI could perform tasks ranging from medical practice to coding, essentially covering a broad spectrum of human employment.
The Unsettling Terminology
What makes Altman’s vision particularly striking is the terminology he employs: specifically, his use of the phrase “median human.” The term, which implies a statistical average, is both disconcerting and problematic in the context of AI. It reduces the complexity of human capability to a single quantifiable measure, glossing over the nuances of human intelligence and experience.
Altman’s choice of the word “median” also introduces ambiguity and subjectivity. Defining what constitutes a “median” human in terms of intelligence or capability is a contested task, and any answer will vary greatly depending on which traits are measured and who is doing the measuring.
As writer Elizabeth Weil notes in a recent New York Magazine profile of Altman, the powerful AI executive has a disconcerting penchant for the term “median human,” a phrase that amounts to a robotic tech-bro version of “Average Joe.”
The Ethical Concerns
Critics and experts within the AI field have raised ethical concerns regarding this perspective. Brent Mittelstadt, the director of research at the Oxford Internet Institute, finds the comparison between AI and the “median human” offensive and concerning. He emphasizes the lack of a concrete, measurable comparison of human intelligence within AI research, further highlighting the vague nature of the concept.
Henry Shevlin, an AI ethicist and professor at the University of Cambridge, adds that equating AI with human intelligence is a sensitive issue. While AI may match typical human-level performance on some tasks, genuine human intelligence encompasses more complex capacities, such as judgment, social understanding, and lived experience, that are not easily replicated by machines.
The Broader Implications
Sam Altman’s perspective on AGI has drawn significant attention because of his influential position in the AI community. While he has advocated for using AI to address critical global challenges such as climate change, and has championed policies like Universal Basic Income, his framing of AGI as a replacement for the “median” human has sparked debate about the ethical and societal implications of the technology.
As we move forward in the AI era, it is essential to consider the broader implications of AI development. Questions about the nature of human intelligence, the roles AI should play, and the ethical boundaries of AI are becoming increasingly relevant. The future of AI should be guided by a thoughtful and inclusive discussion that considers the interests and values of all stakeholders.
Sam Altman’s vision for AI and the “median human” raises thought-provoking questions about the future of technology and society. It prompts us to reflect on the evolving relationship between humans and AI, emphasizing the importance of responsible AI development and ethical considerations in shaping this future.