Leopold Aschenbrenner

AI researcher

Leopold Aschenbrenner is an artificial intelligence (AI) researcher. He was part of OpenAI's "Superalignment" team before being fired in April 2024 over an alleged information leak. He has published a popular essay called "Situational Awareness" about the emergence of artificial general intelligence (AGI) and related security risks.

Biography

Aschenbrenner was born in Germany. He did research for the Global Priorities Initiative at Oxford University and co-authored a 2024 working paper with Philip Trammell of Oxford. He has also been involved with the effective altruism movement.

OpenAI

Aschenbrenner joined OpenAI in 2023 to work on a project called "Superalignment", which researched how potential future superintelligences could be aligned with human values.

In April 2023, a hacker gained access to OpenAI's internal messaging system and stole information, an event that OpenAI kept private. Subsequently, Aschenbrenner wrote a memo to OpenAI's board of directors about the possibility of industrial espionage by Chinese and other foreign entities, arguing that OpenAI's security was insufficient. According to Aschenbrenner, the memo led to tensions between the board and the leadership over security, and he received a warning from human resources. OpenAI fired him in April 2024 over an alleged information leak, which Aschenbrenner said concerned a benign brainstorming document he had shared with three external researchers for feedback. OpenAI stated that the firing was unrelated to the security memo; Aschenbrenner said he was told explicitly at the time that the memo was a major reason. The "Superalignment" team was dissolved a month later, amid the departures from OpenAI of other researchers such as Ilya Sutskever and Jan Leike.

Investment

Aschenbrenner said that he started an investment firm backed by investors Patrick Collison, John Collison, Daniel Gross, and Nat Friedman.

Situational Awareness essay

Aschenbrenner wrote a 165-page essay titled "Situational Awareness: The Decade Ahead". Its sections predict the emergence of AGI, imagine a path from AGI to superintelligence, describe four risks to humanity, outline a way for humans to deal with superintelligent machines, and articulate the principles of an "AGI realism". He specifically warns that the United States needs to defend against the use of AI technologies by countries such as Russia and China. His analysis rests on the future capacity of AI systems to conduct AI research themselves, which a Forbes writer described as "recursive self-improvement and runaway superintelligence."

References

  1. Allen, Mike (2024-06-23). "10 takeaways: AI from now to 2034". Axios. Retrieved 2024-12-27.
  2. "Ex-OpenAI employee writes AI essay: War with China, resources and robots". heise online. 2024-07-02. Retrieved 2024-12-28.
  3. Metz, Cade (2024-07-04). "A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too". The New York Times. Archived from the original on 2024-12-26. Retrieved 2024-12-27.
  4. Altchek, Ana. "Ex-OpenAI employee speaks out about why he was fired: 'I ruffled some feathers'". Business Insider. Retrieved 2024-12-28.
  5. "Ex-OpenAI Employee Reveals Reason For Getting Fired, "Security Memo Was..."". NDTV. 2024-06-06. Retrieved 2024-12-28.
  6. Field, Hayden (2024-05-17). "OpenAI dissolves team focused on long-term AI risks, less than one year after announcing it". CNBC. Retrieved 2024-12-28.
  7. "Post Script: Patrick Collison the Swiss dictator; McKillen-Bono whiskey lands ex-Bank of Ireland governor". Business Post. Retrieved 2024-12-27.
  8. Naughton, John (2024-06-15). "How's this for a bombshell – the US must make AI its next Manhattan Project". The Observer. ISSN 0029-7712. Retrieved 2024-12-27.
  9. Toews, Rob (2024-11-05). "AI That Can Invent AI Is Coming. Buckle Up". Forbes. Retrieved 2024-12-27.