Why the next leaps towards AGI may be “born secret”
A new Manhattan Project to build AGI / ASI is near
[Note: I had planned to publish a piece on battles over AI in the legal and political arenas, but there was a significant development regarding the U.S. government’s role in pursuing Artificial General Intelligence (AGI) on Tuesday Nov. 19th that is too important not to comment on. Here is my piece on that development.]
A new Manhattan Project
On Tuesday, the U.S.-China Economic and Security Review Commission (USCC) presented its 2024 Annual Report. The USCC is an independent commission that reports directly to Congress. It is empowered to access information from any government department, including from the U.S. intelligence community. It does not have direct legislative or implementation authority, but its recommendations carry significant weight with Congress and reflect the policymaking priorities of the U.S. government.
Of the USCC annual report’s 32 recommendations, this one sits right at the top, at number 1 (emphasis added):
The Commission recommends: I. Congress establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability. AGI is generally defined as systems that are as good as or better than human capabilities across all cognitive domains and would usurp the sharpest human minds at every task. Among the specific actions the Commission recommends for Congress:
Let’s look deeper at the context for such a project, and what we can glean specifically from the language of the recommendation.
The National Security State Gets Situational Awareness on AI
In June of this year, Leopold Aschenbrenner published a set of essays called “Situational Awareness: The Decade Ahead”. Aschenbrenner is a former researcher on the OpenAI Superalignment team (a team studying approaches to AI safety) and now runs an AGI-focused investment fund he founded.
In Situational Awareness, Aschenbrenner builds the case for just such a U.S. Manhattan Project for AI. The argument itself is simple - most of the essays are devoted to establishing a basis for each step in it. His conclusions, though, were controversial enough to draw derision from many in the AI community.
A condensed version of Aschenbrenner’s 165-page argument goes like this:
1. Based on trends in AI capabilities research since GPT-2, we are on course for AGI by 2027.
2. Once AGI capability is available, if labs focus on automating AI research itself, progress in AI should accelerate.
3. If progress comparable to the jump from GPT-2 to GPT-4, or from GPT-4 to AGI, can be achieved again, we should expect superintelligence before the end of the decade.
4. Infrastructure buildout, meaning electricity and compute, will be critical, culminating in $1 trillion compute clusters (n.b. Microsoft announced a $100 billion cluster earlier this year).
5. Given the power of such models, the inability of frontier labs to protect them from espionage disqualifies them from being the stewards of those models.
6. Superintelligence will be a decisive economic and military advantage.
7. Although the U.S. leads AI development now, China’s ability to build new power stations quickly could give it a decisive advantage.
Given the logic of his preceding arguments, and the inappropriateness of a startup controlling superintelligence, Aschenbrenner’s June 2024 essays predicted that a U.S. AGI Manhattan Project would be started by 2027-2028. Maybe we’re ahead of schedule.
Let’s look more closely at what the USCC recommendation actually says.
“Manhattan Project-Like”
What is implied by the USCC report’s “Manhattan Project-like” terminology? In common parlance, the Manhattan Project has come to be thought of as simply an “all-out” effort. In fact it implies more. The Manhattan Project did more than create the atomic bomb - it created a system of secrecy and classification that continues to this day.
One key concept is the “born secret” doctrine. Established by the Atomic Energy Acts of 1946 and 1954, “born secret” defines areas of scientific and technical knowledge that are classified from their moment of creation - no affirmative act of classification is required, and it doesn’t matter who discovers or develops the knowledge. And indeed, patent filings and academic preprints have disappeared over the years into this system of classification.