
Sam Altman asks: Should OpenAI let GPT-4 off the leash?



In a recent interview, OpenAI CEO Sam Altman mused that the company might have a responsibility to release GPT-4 "off the chain," stripped of its safety constraints, as a stark demonstration of AI's power, much as the Hiroshima bombing forced the world to reckon with nuclear weapons. The question is whether such a shock could prompt coordinated global action on AI development, and Altman appears to be at least entertaining the possibility.


ChatGPT, a language model developed by OpenAI, has convinced many people, especially those closely involved, that we are at a critical juncture in human history. Past technological advances such as fire, the wheel, science, money, electricity, the transistor, and the internet greatly increased human power, but AI is different: it aims to create machines that could be our equals and may eventually surpass us. OpenAI is an extraordinary organization composed of intelligent, effective, and well-intentioned individuals. Its capped-profit structure and transparent approach to its creations demonstrate a commitment to serving the public good and to limiting the societal harm that could result from AI development.



OpenAI created ChatGPT to send a message to humanity: "This is just a rudimentary version of what's to come. You need to examine it closely, comprehend its potential, and recognize the immense risks it poses, including the possibility of existential threats to humanity. You must act quickly and have a voice in its future development, because it will not remain rudimentary for long. Soon it will be highly capable, essential, and potentially uncontrollable." OpenAI's approach is radically transparent, cautious, and unusual for the tech industry. Those at the forefront of such technologies should understand the responsibility they hold. In a recent two-and-a-half-hour interview with Lex Fridman, the AI researcher and podcast host, Altman demonstrated his understanding of the weight of that responsibility. The interview captures a significant moment in history as Altman grapples with both the transformative potential and the existential risks of AI. Like most people, even Altman is somewhat apprehensive about what the future holds.


Humans do not build such advanced AIs by hand. A language model like ChatGPT is as enigmatic and intricate as the human brain itself. Rather than explicitly programming GPT, OpenAI and other developers created the environment and conditions under which GPT, in effect, created itself. This remarkable process did not produce a human-like consciousness. Instead, it generated a form of intelligence that is wholly unfamiliar to us. No one can accurately describe GPT's inner experience, if it has any, or fully explain how it produces its responses; the mechanics of its operation are still not entirely understood. Yet through exposure to vast amounts of human language, GPT has learned to simulate understanding and to translate seamlessly between humans and machines, with a fluency that is unparalleled.



GPT may not be human, but it has been shaped by humanity. It has ingested more written material than any human ever has, making it capable of mirroring both our best and worst qualities. However, GPT-4 is not just a tool to be unleashed without caution. OpenAI spent eight months carefully training and constraining it before releasing it to the public.

While Frankenstein may have agonized over flipping the switch on his monster, OpenAI CEO Sam Altman does not have that luxury. With numerous companies vying for a piece of the trillion-dollar AI market, it is crucial that the private and public sectors rapidly adapt to this technology, with a particular focus on ethics and safety. Stanford University has already demonstrated that anyone can create a rudimentary ChatGPT clone for a mere hundred dollars, and with thousands of such models soon to exist, each imbued with its own set of ethics and standards, the need for caution and responsibility is more pressing than ever.


Moreover, even if the world could reach a consensus on limitations for AI today, there is no guarantee that we could maintain control once these systems reach a certain level of capability. The AI community calls this the "alignment" problem: ensuring that an AI's interests remain in line with ours. It remains unclear how to accomplish this as AI continues to advance. The most pessimistic view holds that a sufficiently advanced artificial general intelligence (AGI) will, with near-certainty, spell the end of humanity. As decision theorist and AI researcher Eliezer Yudkowsky has put it, the core difficulty is securing even a chance of survival at all, by any means.

During the interview with Fridman, Altman emphasized, "I want to be crystal clear: I don't think we have figured out a way to align a super powerful system yet. We currently employ RLHF (Reinforcement Learning from Human Feedback) that works at our current level."
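Altman's mention of RLHF can be made concrete with a toy sketch. The following is a deliberately simplified illustration, not OpenAI's actual method: a one-parameter "policy" stands in for a language model, and a hand-coded reward function stands in for a reward model trained on human preference comparisons. The names (`ToyPolicy`, `reward_model`) and numbers are invented for illustration; real RLHF fine-tunes billions of parameters with algorithms such as PPO.

```python
import random

random.seed(0)

def reward_model(response: str) -> float:
    # Stand-in for a learned reward model trained on human comparisons:
    # here it simply prefers the polite response.
    return 1.0 if "please" in response else 0.0

class ToyPolicy:
    """A one-parameter 'language model': the probability of answering politely."""
    def __init__(self, p_polite: float = 0.1):
        self.p_polite = p_polite  # starts out mostly impolite

    def sample(self) -> str:
        polite = random.random() < self.p_polite
        return "please see the docs" if polite else "go away"

    def update(self, response: str, reward: float, lr: float = 0.01):
        # REINFORCE-style update on the single Bernoulli parameter:
        # rewarded behavior becomes more likely; zero reward leaves it unchanged.
        polite = "please" in response
        grad = (1.0 / self.p_polite) if polite else (-1.0 / (1.0 - self.p_polite))
        self.p_polite += lr * reward * grad
        self.p_polite = min(max(self.p_polite, 0.01), 0.99)

policy = ToyPolicy()
for _ in range(500):
    response = policy.sample()
    policy.update(response, reward_model(response))

# After enough feedback, the policy strongly prefers the rewarded behavior.
```

The point of the sketch is the shape of the loop, which is also the limitation Altman flags: the model is only ever steered toward what the reward signal captures, and nothing here guarantees alignment beyond the current level of capability.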

Given the significance of the situation, here are some of the most notable points Altman made in his conversation with Fridman.



No single quote captures Altman's message cleanly, but he appears to be weighing whether releasing GPT-4 without safety protocols could deliver a shock sharp enough to spur the world into action. GPT-4 itself is likely not powerful enough to destroy civilization, yet Altman wonders whether OpenAI has a responsibility to demonstrate its power in shock-and-awe fashion before successor systems arrive that could end us. Such a move might cast Altman as a supervillain, or it might genuinely jolt the world awake; it could also prove woefully inadequate, or backfire by lulling people into a false sense of security. The complexity of the issue is hard to convey in a single quote, and Fridman's full interview is worth watching for a better understanding of Altman's thinking and concerns.

Altman described GPT-4 not as an artificial general intelligence (AGI) but as the most complex software object humanity has produced to date. He said OpenAI is building the technology in public so the world can shape how it develops, and that the collective intelligence and creativity of the world will beat OpenAI and all the red teamers it could hire. Asked whether the tool is currently being used for good or for evil, Altman acknowledged that it will cause harm, but argued that it could also bring tremendous benefits. He concluded that humanity needs a deliberative conversation about where to draw the system's boundaries and how to establish its overall rules.





