Leopold Aschenbrenner is an artificial intelligence (AI) researcher who was fired by OpenAI, the company behind ChatGPT, and who subsequently published a widely read essay on the security risks posed by future AI technologies.
Biography
Aschenbrenner was born in Germany. He did research for the Global Priorities Initiative at Oxford University and co-authored a 2024 working paper with Philip Trammell of Oxford. He also has some experience with the effective altruism movement. As of 2024, he lived in the Bay Area. He worked on a team at OpenAI that handled the alignment of future AI systems with human objectives.
In 2023, Aschenbrenner wrote to the OpenAI board of directors about the possibility of industrial espionage by Chinese and other non-U.S. entities. After he provided confidential company information to outsiders, the company dismissed him. He alleged that he was fired for political reasons; OpenAI stated that he was not fired because of the security issues he had raised, and it disputed his views on security risks.
Aschenbrenner said that he started an investment firm with investors Patrick and John Collison, Daniel Gross, and Nat Friedman.
Situational Awareness essay
Aschenbrenner wrote "Situational Awareness: The Decade Ahead," a 165-page essay that has been seen as a manifesto about the risks of artificial general intelligence (AGI). His approach has been described as optimistic determinism. "Situational Awareness" has sections that predict the emergence of AGI, imagine a path from AGI to "superintelligence," describe four risks to humanity, outline a way for humans to deal with superintelligent machines, and articulate the principles of an "AGI realism." He specifically warns that the United States needs to defend against the use of AI technologies by countries such as Russia and China.
References

- Allen, Mike (2024-06-23). "10 takeaways: AI from now to 2034". Axios. Retrieved 2024-12-27.
- Metz, Cade (2024-07-04). "A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too". The New York Times. Archived from the original on 2024-12-26. Retrieved 2024-12-27.
- "Post Script: Patrick Collison the Swiss dictator; McKillen-Bono whiskey lands ex-Bank of Ireland governor". Business Post. Retrieved 2024-12-27.
- Naughton, John (2024-06-15). "How's this for a bombshell – the US must make AI its next Manhattan Project". The Observer. ISSN 0029-7712. Retrieved 2024-12-27.