
Artificial general intelligence

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.
Latest revision as of 07:04, 15 January 2025.
{{Short description|Type of AI with wide-ranging abilities}}
{{Distinguish|Generative artificial intelligence|Artificial superintelligence}}
{{Use British English|date = March 2019}}
{{Use dmy dates|date=December 2019}}
{{Artificial intelligence|Major goals}}
'''Artificial general intelligence''' ('''AGI''') is a type of ] (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with ], which is limited to specific tasks.<ref>{{Cite web |last=Krishna |first=Sri |date=2023-02-09 |title=What is artificial narrow intelligence (ANI)? |url=https://venturebeat.com/ai/what-is-artificial-narrow-intelligence-ani/ |access-date=2024-03-01 |website=VentureBeat |language=en-US |quote="ANI is designed to perform a single task."}}</ref> ] (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of ].


Creating AGI is a primary goal of AI research and of companies such as ]<ref name="OpenAI Charter">{{Cite web |title=OpenAI Charter |url=https://openai.com/charter |access-date=2023-04-06 |website=OpenAI |language=en-US |quote="Our mission is to ensure that artificial general intelligence benefits all of humanity."}}</ref> and ].<ref>{{Cite web |last=Heath |first=Alex |date=2024-01-18 |title=Mark Zuckerberg's new goal is creating artificial general intelligence |url=https://www.theverge.com/2024/1/18/24042354/mark-zuckerberg-meta-agi-reorg-interview |access-date=2024-06-13 |website=The Verge |language=en |quote="Our vision is to build AI that is better than human-level at all of the human senses."}}</ref> A 2020 survey identified 72 active AGI ] projects across 37 countries.<ref name="baum">{{Cite report |url=https://gcrinstitute.org/papers/055_agi-2020.pdf |title=A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy |last=Baum |first=Seth D. |date=2020 |publisher=Global Catastrophic Risk Institute |quote="72 AGI R&D projects were identified as being active in 2020." |access-date=28 November 2024}}</ref>


The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2023, some argue that it may be possible in years or decades; others maintain it might take a century or longer; a minority believe it may never be achieved; and another minority claims that it is already here.<ref name=":22">{{Cite web |title=AI timelines: What do experts in artificial intelligence expect for the future? |url=https://ourworldindata.org/ai-timelines |access-date=2023-04-06 |website=Our World in Data}}</ref><ref>{{Cite web |last=Metz |first=Cade |date=15 May 2023 |title=Some Researchers Say A.I. Is Already Here, Stirring Debate in Tech Circles |website=] |url=https://www.nytimes.com/2023/05/15/technology/ai-artificial-general-intelligence-debate.html |access-date=2023-05-18}}</ref> Notable AI researcher ] has expressed concerns about the rapid progress towards AGI, suggesting it could be achieved sooner than many expect.<ref>{{Cite news |date=1 May 2023 |title=AI pioneer Geoffrey Hinton quits Google and warns of danger ahead |url=https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html |access-date=2023-05-02 |work=The New York Times |quote="It is hard to see how you can prevent the bad actors from using it for bad things."}}</ref>


There is debate on the exact definition of AGI and regarding whether modern ]s (LLMs) such as ] are early forms of AGI.<ref>{{Cite journal |last1=Bubeck |first1=Sébastien |last2=Chandrasekaran |first2=Varun |last3=Eldan |first3=Ronen |last4=Gehrke |first4=Johannes |last5=Horvitz |first5=Eric |date=2023 |title=Sparks of Artificial General Intelligence: Early experiments with GPT-4 |journal=arXiv preprint |arxiv=2303.12712 |quote="GPT-4 shows sparks of AGI."}}</ref> AGI is a common topic in ] and ].<ref>{{Cite book |last=Butler |first=Octavia E. |title=Parable of the Sower |publisher=Grand Central Publishing |date=1993 |isbn=978-0-4466-7550-5 |quote="All that you touch you change. All that you change changes you."}}</ref><ref>{{Cite book |last=Vinge |first=Vernor |title=A Fire Upon the Deep |publisher=Tor Books |date=1992 |isbn=978-0-8125-1528-2 |quote="The Singularity is coming."}}</ref>


Contention exists over whether AGI represents an ].<ref name="NYT-202306302">{{Cite news |last=Morozov |first=Evgeny |date=June 30, 2023 |title=The True Threat of Artificial Intelligence |url=https://www.nytimes.com/2023/06/30/opinion/artificial-intelligence-danger.html |work=The New York Times |quote="The real threat is not AI itself but the way we deploy it."}}</ref><ref>{{Cite news |date=2023-03-23 |title=Impressed by artificial intelligence? Experts say AGI is coming next, and it has 'existential' risks |url=https://www.abc.net.au/news/2023-03-24/what-is-agi-artificial-general-intelligence-ai-experts-risks/102035132 |access-date=2023-04-06 |work=ABC News |language=en-AU |quote="AGI could pose existential risks to humanity."}}</ref><ref>{{Cite book |last=Bostrom |first=Nick |title=Superintelligence: Paths, Dangers, Strategies |publisher=Oxford University Press |date=2014 |isbn=978-0-1996-7811-2 |quote="The first superintelligence will be the last invention that humanity needs to make."}}</ref> Many experts on AI ] that mitigating the risk of human extinction posed by AGI should be a global priority.<ref>{{Cite news |last=Roose |first=Kevin |date=May 30, 2023 |title=A.I. Poses 'Risk of Extinction,' Industry Leaders Warn |url=https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html |work=The New York Times |quote="Mitigating the risk of extinction from AI should be a global priority."}}</ref><ref>{{Cite web |title=Statement on AI Risk |url=https://www.safe.ai/statement-on-ai-risk |access-date=2024-03-01 |website=Center for AI Safety |quote="AI experts warn of risk of extinction from AI."}}</ref> Others find the development of AGI to be too remote to present such a risk.<ref>{{Cite news |last=Mitchell |first=Melanie |date=May 30, 2023 |title=Are AI's Doomsday Scenarios Worth Taking Seriously? 
|url=https://www.nytimes.com/2023/05/30/opinion/ai-risk.html |work=The New York Times |quote="We are far from creating machines that can outthink us in general ways."}}</ref><ref>{{Cite web |last=LeCun |first=Yann |date=June 2023 |title=AGI does not present an existential risk |url=https://yosinski.medium.com/agi-does-not-present-an-existential-risk-b55b6e03c0de |website=Medium |quote="There is no reason to fear AI as an existential threat."}}</ref>


== Terminology ==


AGI is also known as strong AI,{{Sfn|Kurzweil|2005|p=260}}<ref name="Kurzweil 2005-08-05">{{Citation |last=Kurzweil |first=Ray |title=Long Live AI |date=5 August 2005 |work=] |url=https://www.forbes.com/home/free_forbes/2005/0815/030.html |url-status=dead |archiveurl=https://web.archive.org/web/20050814000557/https://www.forbes.com/home/free_forbes/2005/0815/030.html |archivedate=2005-08-14 |ref=none}}: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."</ref> full AI,<ref>{{Cite web |title=The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013 |url=http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |url-status=live |archive-url=https://web.archive.org/web/20140226123940/http://tedxtalks.ted.com/video/The-Age-of-Artificial-Intellige |archive-date=26 February 2014 |access-date=22 February 2014}}</ref> human-level AI,<ref name=":22" /> human-level intelligent AI, or general intelligent action.{{Sfn|Newell|Simon|1976|ps=, This is the term they use for "human-level" intelligence in the ] hypothesis.}}
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI, and they play a major role in ] and the ]:
* ]: To have ] and ].<ref>Note that ] is difficult to define. A popular definition, due to ], is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See {{Harv|Nagel|1974}}</ref>
* ]: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.
* ]: The ability to "feel" perceptions or emotions subjectively.
* ]: The capacity for wisdom.
These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the ]. Also, ], among others, argues a machine with these traits may be a threat to human life or dignity.<ref>{{cite journal| title=Why the future doesn't need us | last=Joy | first=Bill |author-link=Bill Joy | magazine=Wired |date=April 2000 }}</ref> It remains to be shown whether any of these traits are ] for strong AI. The role of ] is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the ], would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, ] from a fully intelligent machine, or that it becomes natural to ''ascribe'' these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.


Some academic sources reserve the term "strong AI" for computer programs that experience ] or ].{{Efn|name="Searle's Strong AI"|See below for the origin of the term "strong AI", and see the academic definition of "]" and weak AI in the article ].}} In contrast, weak AI (or narrow AI) is able to solve one specific problem but lacks general cognitive abilities.<ref>{{Cite web |title=The Open University on Strong and Weak AI |url=http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |url-status=dead |archive-url=https://web.archive.org/web/20090925043908/http://www.open2.net/nextbigthing/ai/ai_in_depth/in_depth.htm |archive-date=25 September 2009 |access-date=8 October 2007}}</ref><ref name="Kurzweil 2005-08-05"/> Some academic sources use "weak AI" to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans.{{Efn|name="Searle's Strong AI"}}


Related concepts include artificial ] and transformative AI. An artificial superintelligence (ASI) is a hypothetical type of AGI that is much more generally intelligent than humans,<ref>{{Cite web |title=What is artificial superintelligence (ASI)? {{!}} Definition from TechTarget |url=https://www.techtarget.com/searchenterpriseai/definition/artificial-superintelligence-ASI |access-date=2023-10-08 |website=Enterprise AI |language=en}}</ref> while the notion of transformative AI relates to AI having a large impact on society, for example, similar to the agricultural or industrial revolution.<ref>{{Cite web |title=Artificial intelligence is transforming our world – it is on all of us to make sure that it goes well |url=https://ourworldindata.org/ai-impact |access-date=2023-10-08 |website=Our World in Data}}</ref>


A framework for classifying AGI in levels was proposed in 2023 by ] researchers. They define five levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI (i.e. an artificial superintelligence) is similarly defined but with a threshold of 100%. They consider large language models like ] or ] to be instances of emerging AGI.<ref>{{Cite news |last=Dickson |first=Ben |date=November 16, 2023 |title=Here is how far we are to achieving AGI, according to DeepMind |url=https://venturebeat.com/ai/here-is-how-far-we-are-to-achieving-agi-according-to-deepmind/ |work=VentureBeat}}</ref>
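The threshold idea behind this framework can be made concrete with a short sketch. Only the 50% ("competent") and 100% ("superhuman") cutoffs are stated above; the 90th- and 99th-percentile cutoffs used here for "expert" and "virtuoso" are illustrative assumptions, not quotes from the proposal.

```python
# Illustrative sketch of the DeepMind-style "levels of AGI" thresholds.
# Input: the share of skilled adults the AI outperforms on a wide range
# of non-physical tasks. Only the 50% and 100% cutoffs come from the
# text above; the 90/99 cutoffs are assumed for illustration.
def agi_level(percentile_outperformed: float) -> str:
    """Map a performance percentile (0-100) to an AGI level label."""
    if not 0.0 <= percentile_outperformed <= 100.0:
        raise ValueError("percentile must be in [0, 100]")
    if percentile_outperformed >= 100.0:
        return "superhuman"   # outperforms all skilled adults
    if percentile_outperformed >= 99.0:
        return "virtuoso"     # assumed cutoff
    if percentile_outperformed >= 90.0:
        return "expert"       # assumed cutoff
    if percentile_outperformed >= 50.0:
        return "competent"    # cutoff stated in the text
    return "emerging"
```

By this scheme, a system at the 55th percentile would be labelled "competent", matching the definition quoted above.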


== Characteristics ==
{{Main|Artificial intelligence}}

Various popular definitions of ] have been proposed. One of the leading proposals is the ]. However, there are other well-known definitions, and some researchers disagree with the more popular approaches. {{Efn|AI founder ] writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent."<ref>{{Cite web |last=McCarthy |first=John |author-link=John McCarthy (computer scientist) |date=2007a |title=Basic Questions |url=http://www-formal.stanford.edu/jmc/whatisai/node1.html |url-status=live |archive-url=https://web.archive.org/web/20071026100601/http://www-formal.stanford.edu/jmc/whatisai/node1.html |archive-date=26 October 2007 |access-date=6 December 2007 |publisher=]}}</ref> (For a discussion of some definitions of intelligence used by ] researchers, see ].)}}

=== Intelligence traits ===
However, researchers generally hold that intelligence is required to do all of the following:<ref name=":12">This list of intelligent traits is based on the topics covered by major AI textbooks, including: {{Harvnb|Russell|Norvig|2003}}, {{Harvnb|Luger|Stubblefield|2004}}, {{Harvnb|Poole|Mackworth|Goebel|1998}} and {{Harvnb|Nilsson|1998}}.</ref>
* ], use strategy, solve puzzles, and make judgments under ]
* ], including ]
* ]
* ]
* communicate in ]
* if necessary, ] in completion of any given goal
Many ] approaches (e.g. ], ], and ]) consider additional traits such as ] (the ability to form novel mental images and concepts)<ref>{{Harvnb|Johnson|1987}}</ref> and ].<ref>de Charms, R. (1968). Personal causation. New York: Academic Press.</ref>

Computer-based systems that exhibit many of these capabilities exist (e.g. see ], ], ], ], ], ]). There is debate about whether modern AI systems possess them to an adequate degree.

=== Physical traits ===
Other capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its expression. These include:<ref name=":13">Pfeifer, R. and Bongard J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). {{ISBN|0-2621-6239-3}}</ref>
* the ability to ] (e.g. ], hear, etc.), and
* the ability to act (e.g. ], change location to explore, etc.)
This includes the ability to detect and respond to ].<ref>{{Cite journal |last=White |first=R. W. |date=1959 |title=Motivation reconsidered: The concept of competence |journal=Psychological Review |volume=66 |issue=5 |pages=297–333 |doi=10.1037/h0040934 |pmid=13844397 |s2cid=37385966}}</ref>

Although the ability to sense (e.g. ], hear, etc.) and the ability to act (e.g. ], change location to explore, etc.) can be desirable for some intelligent systems,<ref name=":13" /> these physical capabilities are not strictly required for an entity to qualify as AGI, particularly under the thesis that large language models (LLMs) may already be, or may become, AGI. Even on a less optimistic view of LLMs, there is no firm requirement that an AGI have a human-like form; a silicon-based computational system is sufficient, provided it can process input (language) from the external world in place of human senses. This interpretation aligns with the understanding that AGI has never been prescribed a particular physical embodiment and thus does not demand a capacity for locomotion or traditional "eyes and ears".<ref>{{Cite journal |last=White |first=R. W. |date=1959 |title=Motivation reconsidered: The concept of competence |journal=Psychological Review |volume=66 |issue=5 |pages=297–333 |doi=10.1037/h0040934 |pmid=13844397 |s2cid=37385966}}</ref>

===Tests for human-level AGI{{Anchor|Tests_for_confirming_human-level_AGI}}===
Several tests meant to confirm human-level AGI have been considered, including:<ref>{{Cite web |last=Muehlhauser |first=Luke |date=11 August 2013 |title=What is AGI? |url=http://intelligence.org/2013/08/11/what-is-agi/ |url-status=live |archive-url=https://web.archive.org/web/20140425115445/http://intelligence.org/2013/08/11/what-is-agi/ |archive-date=25 April 2014 |access-date=1 May 2014 |publisher=Machine Intelligence Research Institute}}</ref><ref>{{Cite web |date=13 July 2019 |title=What is Artificial General Intelligence (AGI)? {{!}} 4 Tests For Ensuring Artificial General Intelligence |url=https://www.talkyblog.com/artificial_general_intelligence_agi/ |url-status=live |archive-url=https://web.archive.org/web/20190717071152/https://www.talkyblog.com/artificial_general_intelligence_agi/ |archive-date=17 July 2019 |access-date=17 July 2019 |website=Talky Blog |language=en-US}}</ref>

;] (])
: Proposed by Alan Turing in his 1950 paper "Computing Machinery and Intelligence", this test involves a human judge engaging in natural language conversations with both a human and a machine designed to generate human-like responses.<ref>{{Cite web |last1=Kirk-Giannini |first1=Cameron Domenico |last2=Goldstein |first2=Simon |date=2023-10-16 |title=AI is closer than ever to passing the Turing test for 'intelligence'. What happens when it does? |url=https://theconversation.com/ai-is-closer-than-ever-to-passing-the-turing-test-for-intelligence-what-happens-when-it-does-214721 |access-date=2024-09-22 |website=The Conversation |language=en-US}}</ref> The machine passes the test if it can convince the judge it is human a significant fraction of the time. Turing proposed this as a practical measure of machine intelligence, focusing on the ability to produce human-like responses rather than on the internal workings of the machine.{{Sfn|Turing|1950}}

: Turing described the test as follows:
{{Quote|text=The idea of the test is that the machine has to try and pretend to be a man, by answering questions put to it, and it will only pass if the pretence is reasonably convincing. A considerable portion of a jury, who should not be expert about machines, must be taken in by the pretence.<ref name="Turing1952">{{Cite book |last=Turing |first=Alan |title=Can Automatic Calculating Machines Be Said To Think? |publisher=Oxford University Press |date=1952 |isbn=978-0-1982-5079-1 |editor-last=B. Jack Copeland |editor-link=Jack Copeland |publication-place=Oxford |pages=487–506}}</ref>}}

: In 2014, a chatbot named ], designed to imitate a 13-year-old Ukrainian boy, reportedly passed a Turing Test event by convincing 33% of judges that it was human. However, this claim was met with significant skepticism from the AI research community, who questioned the test's implementation and its relevance to AGI.<ref>{{Cite news |date=2014-06-09 |title=Eugene Goostman is a real boy – the Turing Test says so |url=https://www.theguardian.com/technology/shortcuts/2014/jun/09/eugene-goostman-turing-test-computer-program |access-date=2024-03-03 |work=The Guardian |language=en-GB |issn=0261-3077}}</ref><ref>{{Cite web |date=2014-06-09 |title=Scientists dispute whether computer 'Eugene Goostman' passed Turing test |url=https://www.bbc.com/news/technology-27762088 |access-date=2024-03-03 |website=BBC News}}</ref>

: More recently, a 2024 study suggested that ] was identified as human 54% of the time in a randomized, controlled version of the Turing Test—surpassing older chatbots like ELIZA while still falling behind actual humans (67%).<ref>{{Cite arXiv |last1=Jones |first1=Cameron R. |last2=Bergen |first2=Benjamin K. |title=People cannot distinguish GPT-4 from a human in a Turing test |eprint=2405.08007 |class=cs.HC |date=9 May 2024 }}</ref>

;The Robot College Student Test (])
: A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree. LLMs can now pass university degree-level exams without even attending the classes.<ref>{{Cite web |last=Varanasi |first=Lakshmi |date=21 March 2023 |title=AI models like ChatGPT and GPT-4 are acing everything from the bar exam to AP Biology. Here's a list of difficult exams both AI versions have passed. |url=https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1 |access-date=30 May 2023 |website=]}}</ref>

;The Employment Test (])
: A machine performs an economically important job at least as well as humans in the same job. AIs are now replacing humans in many roles as varied as fast food and marketing.<ref>{{Cite web |last=Naysmith |first=Caleb |date=7 February 2023 |title=6 Jobs Artificial Intelligence Is Already Replacing and How Investors Can Capitalize on It |url=https://www.yahoo.com/now/6-jobs-artificial-intelligence-already-150339825.html |access-date=30 May 2023}}</ref>

;The Ikea test (])
: Also known as the Flat Pack Furniture Test. An AI views the parts and instructions of an Ikea flat-pack product, then controls a robot to assemble the furniture correctly.<ref>{{Cite web |last=Turk |first=Victoria |date=2015-01-28 |title=The Plan to Replace the Turing Test with a 'Turing Olympics' |url=https://www.vice.com/en/article/vvbqma/the-plan-to-replace-the-turing-test-with-a-turing-olympics |access-date=2024-03-03 |website=Vice |language=en}}</ref>

;The Coffee Test (])
: A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons.<ref>{{Cite web |last=Gopani |first=Avi |date=2022-05-25 |title=Turing Test is unreliable. The Winograd Schema is obsolete. Coffee is the answer. |url=https://analyticsindiamag.com/turing-test-is-unreliable-the-winograd-schema-is-obsolete-coffee-is-the-answer/ |access-date=2024-03-03 |website=Analytics India Magazine |language=en-US}}</ref> This has not yet been completed.

;The Modern Turing Test ('']'')
: An AI model is given $100,000 and has to obtain $1&nbsp;million.<ref>{{Cite web |last=Bhaimiya |first=Sawdah |date=June 20, 2023 |title=DeepMind's co-founder suggested testing an AI chatbot's ability to turn $100,000 into $1 million to measure human-like intelligence |url=https://www.businessinsider.com/deepmind-co-founder-suggests-new-turing-test-ai-chatbots-report-2023-6 |access-date=2024-03-03 |website=Business Insider |language=en-US}}</ref><ref>{{Cite web |last=Suleyman |first=Mustafa |date=July 14, 2023 |title=Mustafa Suleyman: My new Turing test would see if AI can make $1 million |url=https://www.technologyreview.com/2023/07/14/1076296/mustafa-suleyman-my-new-turing-test-would-see-if-ai-can-make-1-million/ |access-date=2024-03-03 |website=MIT Technology Review |language=en}}</ref>

===AI-complete problems===
{{Main|AI-complete}}

A problem is informally called "AI-complete" or "AI-hard" if it is believed that in order to solve it, one would need to implement AGI, because the solution is beyond the capabilities of a purpose-specific algorithm.<ref name="Shapiro92">{{Cite book |last=Shapiro |first=Stuart C. |title=Encyclopedia of Artificial Intelligence |publisher=John Wiley |date=1992 |editor-last=Stuart C. Shapiro |edition=Second |location=New York |pages=54–57 |chapter=Artificial Intelligence |chapter-url=http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |archive-url=https://web.archive.org/web/20160201014644/http://www.cse.buffalo.edu/~shapiro/Papers/ai.pdf |archive-date=1 February 2016 |url-status=live}} (Section 4 is on "AI-Complete Tasks".)</ref>

There are many problems that have been conjectured to require general intelligence to solve as well as humans. Examples include ], ], and dealing with unexpected circumstances while solving any real-world problem.<ref>{{Cite journal |last=Yampolskiy |first=Roman V. |date=2012 |title=Turing Test as a Defining Feature of AI-Completeness |url=http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |url-status=live |journal=Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) |pages=3–17 |archive-url=https://web.archive.org/web/20130522094547/http://cecs.louisville.edu/ry/TuringTestasaDefiningFeature04270003.pdf |archive-date=22 May 2013 |editor=Xin-She Yang}}</ref> Even a specific task like ] requires a machine to read and write in both languages, follow the author's argument (reason), understand the context (knowledge), and faithfully reproduce the author's original intent (]). All of these problems need to be solved simultaneously in order to reach human-level machine performance.

However, many of these tasks can now be performed by modern large language models. According to ]'s 2024 AI index, AI has reached human-level performance on many ] for reading comprehension and visual reasoning.<ref>{{Cite web |date=2024-04-15 |title=AI Index: State of AI in 13 Charts |url=https://hai.stanford.edu/news/ai-index-state-ai-13-charts |access-date=2024-05-27 |website=Stanford University Human-Centered Artificial Intelligence |language=en}}</ref>
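The comparison behind "human-level performance on many benchmarks" reduces to checking AI scores against human baselines, benchmark by benchmark. A minimal sketch, with made-up placeholder scores (not the AI Index's actual figures):

```python
# Flag benchmarks where an AI score meets or beats the human baseline.
# All scores below are placeholders for illustration only.
def human_level_benchmarks(ai_scores: dict, human_baselines: dict) -> list:
    """Return the sorted names of benchmarks where AI >= human baseline."""
    return sorted(
        name
        for name, score in ai_scores.items()
        if name in human_baselines and score >= human_baselines[name]
    )

ai = {"reading_comprehension": 90.5, "visual_reasoning": 81.2, "competition_math": 40.0}
human = {"reading_comprehension": 89.8, "visual_reasoning": 80.0, "competition_math": 90.0}
print(human_level_benchmarks(ai, human))
# → ['reading_comprehension', 'visual_reasoning']
```

With these placeholder numbers, the first two benchmarks meet the human baseline while competition mathematics does not, mirroring the mixed picture the AI Index reports.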


==History==


===Classical AI===
{{Main|History of artificial intelligence|Symbolic artificial intelligence}}
Modern AI research began in the mid-1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades.<ref>{{Cite web |last=Kaplan |first=Andreas |date=2022 |title=Artificial Intelligence, Business and Civilization – Our Fate Made in Machines |url=https://www.routledge.com/Artificial-Intelligence-Business-and-Civilization-Our-Fate-Made-in-Machines/Kaplan/p/book/9781032155319 |url-status=live |archive-url=https://web.archive.org/web/20220506103920/https://www.routledge.com/Artificial-Intelligence-Business-and-Civilization-Our-Fate-Made-in-Machines/Kaplan/p/book/9781032155319 |archive-date=6 May 2022 |access-date=12 March 2022}}</ref> AI pioneer ] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref>


Their predictions were the inspiration for ] and ]'s character ], who embodied what AI researchers believed they could create by the year 2001. AI pioneer ] was a consultant<ref>{{Cite web |title=Scientist on the Set: An Interview with Marvin Minsky |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |url-status=live |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |access-date=5 April 2008}}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time. He said in 1967, "Within a generation... the problem of creating 'artificial intelligence' will substantially be solved".<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref>
Several ], such as ]'s ] project (that began in 1984), and ]'s ] project, were directed at AGI.

However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".{{Efn|The ] specifically criticized AI's "grandiose objectives" and led the dismantling of AI research in England.<ref>{{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}</ref> In the U.S., ] became determined to fund only "mission-oriented direct research, rather than basic undirected research".{{Sfn|NRC|1999|loc="Shift to Applied Research Increases Investment"}}<ref>{{Harvnb|Crevier|1993|pp=115–117}}; {{Harvnb|Russell|Norvig|2003|pp=21–22}}.</ref>}} In the early 1980s, Japan's ] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{Harvnb|Crevier|1993|p=211}}, {{Harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of ], both industry and government pumped money into the field.{{Sfn|NRC|1999|loc="Shift to Applied Research Increases Investment"}}<ref>{{Harvnb|Crevier|1993|pp=161–162,197–203,240}}; {{Harvnb|Russell|Norvig|2003|p=25}}.</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who predicted the imminent achievement of AGI had been mistaken. By the 1990s, AI researchers had a reputation for making vain promises. They became reluctant to make predictions at all{{Efn|As AI founder ] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case."<ref>{{Cite web |last=McCarthy |first=John |author-link=John McCarthy (computer scientist) |date=2000 |title=Reply to Lighthill |url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html |url-status=live |archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html |archive-date=30 September 2008 |access-date=29 September 2007 |publisher=Stanford University}}</ref>}} and avoided mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer".<ref>{{Cite news |last=Markoff |first=John |date=14 October 2005 |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5%2FNUs1cQCQ |url-status=live |archive-url=https://web.archive.org/web/20230202181023/https://www.nytimes.com/2005/10/14/technology/behind-artificial-intelligence-a-squadron-of-bright-real-people.html |archive-date=2 February 2023 |access-date=18 February 2017 |work=The New York Times |quote=At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.}}</ref>


===Narrow AI research===
{{Main|Artificial intelligence}}


In the 1990s and early 21st century, mainstream AI achieved commercial success and academic respectability by focusing on specific sub-problems where AI can produce verifiable results and commercial applications, such as ] and ]s.<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is heavily funded in both academia and industry. {{As of|2018}}, development in this field was considered an emerging trend, and a mature stage was expected to be reached in more than 10 years.<ref>{{Cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |url-status=live |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |access-date=7 May 2019 |publisher=Gartner Reports}}</ref>


At the turn of the century, many mainstream AI researchers<ref name=":4"/> hoped that strong AI could be developed by combining programs that solve various sub-problems.
] wrote in 1988: <blockquote>I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real-world competence and the ] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical ] is driven uniting the two efforts.<ref name=":4">{{Harvnb|Moravec|1988|p=20}}</ref></blockquote>


However, even at the time, this was disputed. For example, Stevan Harnad of Princeton University concluded his 1990 paper on the ] by stating: <blockquote>The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer).<ref>{{Cite journal |last=Harnad |first=S. |date=1990 |title=The Symbol Grounding Problem |journal=Physica D |volume=42 |issue=1–3 |pages=335–346 |arxiv=cs/9906002 |bibcode=1990PhyD...42..335H |doi=10.1016/0167-2789(90)90087-6 |s2cid=3204300}}</ref></blockquote>
==Artificial general intelligence research==
Artificial general intelligence research aims to create machines capable of general intelligent action.{{sfn|Goertzel|Pennachin|2006}} The research is extremely diverse and often pioneering in nature. In the introduction to his book,{{sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by ] in "]"<ref name=K/> (i.e. between 2015 and 2045) is plausible.{{sfn|Goertzel|2007}} Most mainstream AI researchers doubt that progress will be this rapid. Organizations actively pursuing AGI include ], the ], the ], Bitphase AI,<ref></ref> TexAI,<ref></ref> ] and the associated ], and ].


===Modern artificial general intelligence research===
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. A mathematical formalism of AGI was proposed by ] in 2000. Named ], the proposed AGI agent maximises "the ability to satisfy goals in a wide range of environments".<ref name=":14">{{Cite book |last=Hutter |first=Marcus |url=https://link.springer.com/book/10.1007/b138233 |title=Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability |date=2005 |publisher=Springer |isbn=978-3-5402-6877-2 |series=Texts in Theoretical Computer Science an EATCS Series |doi=10.1007/b138233 |access-date=19 July 2022 |archive-url=https://web.archive.org/web/20220719052038/https://link.springer.com/book/10.1007/b138233 |archive-date=19 July 2022 |url-status=live |s2cid=33352850}}</ref> This type of AGI, characterized by the ability to maximise a mathematical definition of intelligence rather than exhibit human-like behaviour,<ref>{{Cite thesis |last=Legg |first=Shane |title=Machine Super Intelligence |date=2008 |access-date=19 July 2022 |publisher=University of Lugano |url=http://www.vetta.org/documents/Machine_Super_Intelligence.pdf |archive-url=https://web.archive.org/web/20220615160113/https://www.vetta.org/documents/Machine_Super_Intelligence.pdf |archive-date=15 June 2022 |url-status=live}}</ref> was also called universal artificial intelligence.<ref>{{Cite book |last=Goertzel |first=Ben |url=https://www.researchgate.net/publication/271390398 |title=Artificial General Intelligence |date=2014 |publisher=Journal of Artificial General Intelligence |isbn=978-3-3190-9273-7 |series=Lecture Notes in Computer Science |volume=8598 |doi=10.1007/978-3-319-09274-4 |s2cid=8387410}}</ref>
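Legg and Hutter's formalism makes the phrase "ability to satisfy goals in a wide range of environments" precise. In a simplified sketch of their notation (the symbols below are illustrative and do not appear in this article's sources), the universal intelligence of an agent (policy) <math>\pi</math> is a simplicity-weighted sum of its performance over all computable environments:

:<math>\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V^{\pi}_{\mu}</math>

where <math>E</math> is the set of computable reward environments, <math>K(\mu)</math> is the Kolmogorov complexity of environment <math>\mu</math> (so simpler environments receive exponentially greater weight), and <math>V^{\pi}_{\mu}</math> is the expected cumulative reward the agent earns in <math>\mu</math>. AIXI is the (incomputable) agent that maximises this quantity: a high score requires performing well across every environment, not just a narrow task.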
The term AGI was re-introduced and popularized by ] and ] around 2002.<ref>{{Cite web |title=Who coined the term "AGI"? |url=http://goertzel.org/who-coined-the-term-agi/ |url-status=live |archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/ |archive-date=28 December 2018 |access-date=28 December 2018 |website=goertzel.org |language=en-US}}, via ]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{Harvnb|Wang|Goertzel|2007}}</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>{{Cite web |title=First International Summer School in Artificial General Intelligence, Main summer school: June 22 – July 3, 2009, OpenCog Lab: July 6-9, 2009 |url=https://goertzel.org/AGI_Summer_School_2009.htm |url-status=live |archive-url=https://web.archive.org/web/20200928173146/https://www.goertzel.org/AGI_Summer_School_2009.htm |archive-date=28 September 2020 |access-date=11 May 2020}}</ref> by Xiamen University's Artificial Brain Laboratory and OpenCog. The first university courses on AGI were given in 2010<ref>{{Cite web |title=Избираеми дисциплини 2009/2010 – пролетен триместър |trans-title=Elective courses 2009/2010 – spring trimester |url=http://fmi-plovdiv.org/index.jsp?id=1054&ln=1 |url-status=live |archive-url=https://web.archive.org/web/20200726103659/http://fmi-plovdiv.org/index.jsp?id=1054&ln=1 |archive-date=26 July 2020 |access-date=11 May 2020 |website=Факултет по математика и информатика |language=bg}}</ref> and 2011<ref>{{Cite web |title=Избираеми дисциплини 2010/2011 – зимен триместър |trans-title=Elective courses 2010/2011 – winter trimester |url=http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1 |url-status=live |archive-url=https://web.archive.org/web/20200726094625/http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1 |archive-date=26 July 2020 |access-date=11 May 2020 |website=Факултет по математика и информатика |language=bg}}</ref> at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course on AGI in 2018, organized by ] and featuring a number of guest lecturers.

{{As of|2023}}, a small number of computer scientists are active in AGI research, and many contribute to a series of AGI conferences. An increasing number of researchers are interested in open-ended learning,<ref name=":10">{{Cite journal |last1=Shevlin |first1=Henry |last2=Vold |first2=Karina |last3=Crosby |first3=Matthew |last4=Halina |first4=Marta |date=2019-10-04 |title=The limits of machine intelligence: Despite progress in machine intelligence, artificial general intelligence is still a major challenge |journal=EMBO Reports |language=en |volume=20 |issue=10 |pages=e49177 |doi=10.15252/embr.201949177 |issn=1469-221X |pmc=6776890 |pmid=31531926}}</ref><ref name=":11">{{Cite arXiv |eprint=2303.12712 |class=cs.CL |first1=Sébastien |last1=Bubeck |first2=Varun |last2=Chandrasekaran |title=Sparks of Artificial General Intelligence: Early experiments with GPT-4 |date=2023-03-27 |last3=Eldan |first3=Ronen |last4=Gehrke |first4=Johannes |last5=Horvitz |first5=Eric |last6=Kamar |first6=Ece |last7=Lee |first7=Peter |last8=Lee |first8=Yin Tat |last9=Li |first9=Yuanzhi |last10=Lundberg |first10=Scott |last11=Nori |first11=Harsha |last12=Palangi |first12=Hamid |last13=Ribeiro |first13=Marco Tulio |last14=Zhang |first14=Yi}}</ref> which is the idea of allowing AI to continuously learn and innovate like humans do.

===Whole brain emulation===
{{main|Mind uploading}}
One popular approach to achieving general intelligent action is ]. A low-level brain model is built by ] and ] a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a ] model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably.<ref name=Roadmap>{{Harvnb|Sandberg|Boström|2008}}. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."</ref> Whole brain emulation is discussed in ] and ], in the context of ] for medical research purposes. It is discussed in ] research{{sfn|Goertzel|2007}} as an approach to strong AI. ] technologies that could deliver the necessary detailed understanding are improving rapidly, and ] ] in the book '']''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.

====Complications====
A fundamental criticism of the simulated brain approach derives from ], where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref> If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel{{sfn|Goertzel|2007}} proposes virtual embodiment (like ]), but it is not yet known whether this would be sufficient.

====Processing requirements====
[[File:...|thumb|Estimates of how much processing power is needed to emulate a human brain at various levels (from ], and ] and ]), along with the fastest supercomputer from ] mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where ] arises.{{sfn|Sandberg|Boström|2008}}]]
For low-level brain simulation, an extremely powerful computer would be required. The ] has a huge number of ]. Each of the 10<sup>11</sup> (one hundred billion) ] has on average 7,000 synaptic connections to other neurons. It has been estimated that the brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) neuron updates per second.{{sfn|Russell|Norvig|2003}} ] looks at various estimates for the hardware required to equal the human brain and adopts a figure of 10<sup>16</sup> computations per second (cps).<ref>In "Mind Children" {{Harvnb|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. More recently, in 1997, <http://www.transhumanist.com/volume1/moravec.htm> Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.</ref> He uses this figure to predict the necessary hardware will be available sometime between 2015 and 2025, if the current exponential growth in computer power continues.
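The arithmetic behind these estimates can be made explicit. The sketch below is illustrative only: it takes the neuron and synapse averages and Kurzweil's 10<sup>16</sup> cps figure quoted above, the 1.1-years-per-doubling assumption from the figure caption, and a circa-2005 desktop baseline of about 10<sup>9</sup> cps (a figure quoted later in this article); none of the variable names come from the cited sources.

```python
import math

# Order-of-magnitude figures quoted in the text.
NEURONS = 1e11               # neurons in the human brain
SYNAPSES_PER_NEURON = 7e3    # average synaptic connections per neuron
TARGET_CPS = 1e16            # Kurzweil's adopted brain-equivalent estimate
BASELINE_CPS = 1e9           # desktop-class hardware, circa 2005
DOUBLING_PERIOD = 1.1        # years per doubling of computational capacity

# Total synapse count implied by the averages (~7e14, matching the
# quoted 1e14 to 5e14 adult range only as an order of magnitude).
synapses = NEURONS * SYNAPSES_PER_NEURON

# Doublings needed to close the gap, and the implied calendar year (~2031).
doublings = math.log2(TARGET_CPS / BASELINE_CPS)
year = 2005 + doublings * DOUBLING_PERIOD

print(f"{synapses:.0e} synapses, {doublings:.1f} doublings, year {year:.0f}")
```

On these assumptions a desktop-class machine reaches Kurzweil's figure around 2030; starting from faster hardware, such as a contemporary supercomputer, moves the date earlier, toward the 2015–2025 window he predicts.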


Desktop computers using 2&nbsp;GHz ] microprocessors and capable of more than 10<sup>9</sup> cps have been available since 2005. According to the brain power estimates used by ] (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest<ref></ref> no such simulation exists.{{Citation needed|date=April 2011}} There are at least three reasons for this:
* Firstly, the neuron model seems to be oversimplified (see the discussion of modelling neurons below).
* Secondly, there is insufficient understanding of higher cognitive processes<ref>In Goertzel's AGI book, Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware</ref> to establish accurately what the brain's neural activity, observed using techniques such as ], correlates with.
* Thirdly, even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will, therefore, need considerably more hardware.

=== Feasibility ===
As of 2023, the development and potential achievement of AGI remains a subject of intense debate within the AI community. While traditional consensus held that AGI was a distant goal, recent advancements have led some researchers and industry figures to claim that early forms of AGI may already exist.<ref name=":17">{{Cite web |date=23 March 2023 |title=Microsoft Researchers Claim GPT-4 Is Showing "Sparks" of AGI |url=https://futurism.com/gpt-4-sparks-of-agi |access-date=2023-12-13 |website=Futurism}}</ref> AI pioneer ] speculated in 1965 that "machines will be capable, within twenty years, of doing any work a man can do". This prediction failed to come true. Microsoft co-founder ] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{Cite news |last1=Allen |first1=Paul |last2=Greaves |first2=Mark |date=October 12, 2011 |title=The Singularity Isn't Near |url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/ |access-date=17 September 2014 |work=]}}</ref> Writing in '']'', roboticist ] claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{Cite news |last=Winfield |first=Alan |title=Artificial intelligence will not turn into a Frankenstein's monster |url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield |url-status=live |archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield |archive-date=17 September 2014 |access-date=17 September 2014 |work=]}}</ref>


A further challenge is the lack of clarity in defining what ] entails. Does it require consciousness? Must it display the ability to set goals as well as pursue them? Is it purely a matter of scale such that if model sizes increase sufficiently, intelligence will emerge? Are facilities such as planning, reasoning, and causal understanding required? Does intelligence require explicitly replicating the brain and its specific faculties? Does it require emotions?<ref>{{Cite journal |last=Deane |first=George |date=2022 |title=Machines That Feel and Think: The Role of Affective Feelings and Mental Action in (Artificial) General Intelligence |url=http://dx.doi.org/10.1162/artl_a_00368 |journal=Artificial Life |volume=28 |issue=3 |pages=289–309 |doi=10.1162/artl_a_00368 |issn=1064-5462 |pmid=35881678 |s2cid=251069071}}</ref>
In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses.<ref>{{Harvnb|Williams|Herrup|1988}}{{citation not found}}</ref><ref> '']''. 9 Jan. 2007</ref> Another estimate is 86 billion neurons of which 16.3 billion are in the ] and 69 billion in the ].{{sfn|Azevedo et al.|2009}} ] synapses are currently unquantified but are known to be extremely numerous.


Most AI researchers believe strong AI can be achieved in the future, but some thinkers, like ] and ], deny the possibility of achieving strong AI.{{Sfn|Clocksin|2003}}<ref name=":0">{{Cite journal |last=Fjelland |first=Ragnar |date=2020-06-17 |title=Why general artificial intelligence will not be realized |journal=Humanities and Social Sciences Communications |language=en |volume=7 |issue=1 |pages=1–9 |doi=10.1057/s41599-020-0494-4 |issn=2662-9992 |s2cid=219710554 |doi-access=free |hdl-access=free |hdl=11250/2726984}}</ref> <!-- "One problem is that while humans are complex, we are not general intelligences." --> ] is among those who believe human-level AI will be accomplished, but that the present level of progress is such that a date cannot accurately be predicted.{{Sfn|McCarthy|2007b}} AI experts' views on the feasibility of AGI wax and wane. Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{Cite news |last=Khatchadourian |first=Raffi |date=23 November 2015 |title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction? |url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom |url-status=live |archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom |archive-date=28 January 2016 |access-date=7 February 2016 |work=]}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further considerations of current AGI progress can be found in ] above.
====Modelling the neurons in more detail====
The ] model assumed by ] and used in many current ] implementations is simple compared with ]. A brain simulation would likely have to capture the detailed cellular behaviour of biological ], presently only understood in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require a computer several orders of magnitude larger than ]'s estimate. In addition the estimates do not account for ] which are at least as numerous as neurons, may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal|author=Swaminathan, Nikhil|title=Glia—the other brain cells|journal=Discover|date=Jan–Feb 2011|url=http://discovermagazine.com/2011/jan-feb/62}}</ref>


A report by Stuart Armstrong and Kaj Sotala of the ] found that "over 60-year time frame there is a strong bias towards predicting the arrival of human-level AI as between 15 and 25 years from the time the prediction was made". They analyzed 95 predictions made between 1950 and 2012 on when human-level AI will come about.<!-- "There was no difference between predictions made by experts and non-experts." see: https://aiimpacts.org/error-in-armstrong-and-sotala-2012/--><ref>Armstrong, Stuart, and Kaj Sotala. 2012. “How We’re Predicting AI—or Failing To.” In ''Beyond AI: Artificial Dreams'', edited by Jan Romportl, Pavel Ircing, Eva Žáčková, Michal Polák and Radek Schuster, 52–75. Plzeň: University of West Bohemia</ref>
There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The ] project implemented non-real time simulations of a "brain" (with 10<sup>11</sup> neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model.<ref></ref> The ] project used one of the fastest supercomputer architectures in the world, ]'s ] platform, to create a real time simulation of a single rat ] consisting of approximately 10,000 neurons and 10<sup>8</sup> synapses in 2006.<ref>{{cite web|url=http://bluebrain.epfl.ch/Jahia/site/bluebrain/op/edit/pid/19085|title=Project Milestones|work=Blue Brain|accessdate=2008-08-11}}</ref> A longer term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," ], director of the Blue Brain Project said in 2009 at the ] in Oxford.<ref></ref> There have also been controversial claims to have simulated a ]. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better.<ref>, </ref>


In 2023, Microsoft researchers published a detailed evaluation of GPT-4. They concluded: "Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."<ref>{{Cite web |date=24 March 2023 |title=Microsoft Now Claims GPT-4 Shows 'Sparks' of General Intelligence |url=https://www.vice.com/en/article/g5ypex/microsoft-now-claims-gpt-4-shows-sparks-of-general-intelligence}}</ref> Another study in 2023 reported that GPT-4 outperforms 99% of humans on the Torrance Tests of Creative Thinking.<ref>{{Cite web |last=Shimek |first=Cary |date=2023-07-06 |title=AI Outperforms Humans in Creativity Test |url=https://neurosciencenews.com/ai-creativity-23585/ |access-date=2023-10-20 |website=Neuroscience News}}</ref><ref>{{Cite journal |last1=Guzik |first1=Erik E. |last2=Byrge |first2=Christian |last3=Gilde |first3=Christian |date=2023-12-01 |title=The originality of machines: AI takes the Torrance Test |journal=Journal of Creativity |volume=33 |issue=3 |pages=100065 |doi=10.1016/j.yjoc.2023.100065 |issn=2713-3745 |s2cid=261087185 |doi-access=free}}</ref>


Blaise Agüera y Arcas and Peter Norvig wrote in 2023 that a significant level of general intelligence has already been achieved with ]. They attributed reluctance to accept this view to four main reasons: a "healthy skepticism about metrics for AGI", an "ideological commitment to alternative AI theories or techniques", a "devotion to human (or biological) exceptionalism", or a "concern about the economic implications of AGI".<ref name=":3">{{Cite journal |last=Arcas |first=Blaise Agüera y |date=2023-10-10 |title=Artificial General Intelligence Is Already Here |url=https://www.noemamag.com/artificial-general-intelligence-is-already-here/ |journal=Noema |language=en-US}}</ref>


2023 also marked the emergence of large multimodal models (large language models capable of processing or generating multiple modalities such as text, audio, and images).<ref>{{Cite web |last=Zia |first=Tehseen |date=January 8, 2024 |title=Unveiling of Large Multimodal Models: Shaping the Landscape of Language Models in 2024 |url=https://www.unite.ai/unveiling-of-large-multimodal-models-shaping-the-landscape-of-language-models-in-2024/ |access-date=2024-05-26 |website=Unite.ai}}</ref>


In 2024, OpenAI released o1-preview, the first of a series of models that "spend more time thinking before they respond". According to ], this ability to think before responding represents a new, additional paradigm. It improves model outputs by spending more computing power when generating the answer, whereas the model scaling paradigm improves outputs by increasing the model size, training data and training compute power.<ref>{{Cite web |date=September 12, 2024 |title=Introducing OpenAI o1-preview |url=https://openai.com/index/introducing-openai-o1-preview/ |website=OpenAI}}</ref><ref>{{Cite magazine |last=Knight |first=Will |title=OpenAI Announces a New AI Model, Code-Named Strawberry, That Solves Difficult Problems Step by Step |url=https://www.wired.com/story/openai-o1-strawberry-problem-reasoning/ |access-date=2024-09-17 |magazine=Wired |language=en-US |issn=1059-1028}}</ref>
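The contrast between the two paradigms can be illustrated with a toy best-of-n sketch: instead of enlarging the model, more computation is spent per query by drawing several candidate answers and keeping the best-scoring one. The sampler and scores below are invented for illustration; this is not OpenAI's actual mechanism.

```python
import random

def noisy_answer(rng):
    # Stand-in for one model sample whose quality varies from run to run.
    return rng.gauss(0.5, 0.2)

def best_of_n(n, seed=0):
    # Spending n times the inference compute: sample n answers, keep the best.
    rng = random.Random(seed)
    return max(noisy_answer(rng) for _ in range(n))

# With a fixed seed, drawing more samples can only improve the best answer found.
print(best_of_n(1) <= best_of_n(32))  # True
```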
An OpenAI employee, Vahid Kazemi, claimed in 2024 that the company had achieved AGI, stating, "In my opinion, we have already achieved AGI and it’s even more clear with ]." Kazemi clarified that while the AI is not yet "better than any human at any task", it is "better than most humans at most tasks." He also addressed criticisms that large language models (LLMs) merely follow predefined patterns, comparing their learning process to the scientific method of observing, hypothesizing, and verifying. These statements have sparked debate, as they rely on a broad and unconventional definition of AGI—traditionally understood as AI that matches human intelligence across all domains. Critics argue that, while OpenAI's models demonstrate remarkable versatility, they may not fully meet this standard. Notably, Kazemi's comments came shortly after OpenAI removed "AGI" from the terms of its partnership with Microsoft, prompting speculation about the company’s strategic intentions.<ref>{{Cite web |date=13 December 2024 |title=OpenAI Employee Claims AGI Has Been Achieved |url=https://orbitaltoday.com/2024/12/13/openai-employee-claims-agi-has-been-achieved/ |access-date=2024-12-27 |website=Orbital Today}}</ref>

=== Timescales ===
] still lack advanced reasoning and planning capabilities, but rapid progress is expected.<ref>{{Cite web |date=April 19, 2024 |title=Next-Gen AI: OpenAI and Meta's Leap Towards Reasoning Machines |url=https://www.unite.ai/next-gen-ai-openai-and-metas-leap-towards-reasoning-machines/ |access-date=2024-06-07 |website=Unite.ai}}</ref>]]
Progress in artificial intelligence has historically gone through periods of rapid progress separated by periods when progress appeared to stop.{{Sfn|Clocksin|2003}} Ending each hiatus were fundamental advances in hardware, software or both to create space for further progress.{{Sfn|Clocksin|2003}}<ref>{{Cite journal |last=James |first=Alex P. |date=2022 |title=The Why, What, and How of Artificial General Intelligence Chip Development |url=https://ieeexplore.ieee.org/document/9390376 |url-status=live |journal=IEEE Transactions on Cognitive and Developmental Systems |volume=14 |issue=2 |pages=333–347 |arxiv=2012.06338 |doi=10.1109/TCDS.2021.3069871 |issn=2379-8920 |s2cid=228376556 |archive-url=https://web.archive.org/web/20220828140528/https://ieeexplore.ieee.org/document/9390376/ |archive-date=28 August 2022 |access-date=28 August 2022}}</ref><ref>{{Cite journal |last1=Pei |first1=Jing |last2=Deng |first2=Lei |last3=Song |first3=Sen |last4=Zhao |first4=Mingguo |last5=Zhang |first5=Youhui |last6=Wu |first6=Shuang |last7=Wang |first7=Guanrui |last8=Zou |first8=Zhe |last9=Wu |first9=Zhenzhi |last10=He |first10=Wei |last11=Chen |first11=Feng |last12=Deng |first12=Ning |last13=Wu |first13=Si |last14=Wang |first14=Yu |last15=Wu |first15=Yujie |date=2019 |title=Towards artificial general intelligence with hybrid Tianjic chip architecture |url=https://www.nature.com/articles/s41586-019-1424-8 |url-status=live |journal=Nature |language=en |volume=572 |issue=7767 |pages=106–111 |bibcode=2019Natur.572..106P |doi=10.1038/s41586-019-1424-8 |issn=1476-4687 |pmid=31367028 |s2cid=199056116 |archive-url=https://web.archive.org/web/20220829084912/https://www.nature.com/articles/s41586-019-1424-8 |archive-date=29 August 2022 |access-date=29 August 2022}}</ref> For example, the computer hardware available in the twentieth century was not sufficient to implement deep learning, which requires large numbers of ]-enabled ].<ref>{{Cite journal |last1=Pandey |first1=Mohit |last2=Fernandez 
|first2=Michael |last3=Gentile |first3=Francesco |last4=Isayev |first4=Olexandr |last5=Tropsha |first5=Alexander |last6=Stern |first6=Abraham C. |last7=Cherkasov |first7=Artem |date=March 2022 |title=The transformational role of GPU computing and deep learning in drug discovery |journal=Nature Machine Intelligence |language=en |volume=4 |issue=3 |pages=211–221 |doi=10.1038/s42256-022-00463-x |issn=2522-5839 |s2cid=252081559 |doi-access=free}}</ref>

In the introduction to his 2006 book,{{Sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century. {{As of|2007}}, the consensus in the AGI research community seemed to be that the timeline discussed by Ray Kurzweil in 2005 in ''The Singularity is Near''<ref name="K">{{Harv|Kurzweil|2005|p=260}}</ref> (i.e. between 2015 and 2045) was plausible.{{Sfn|Goertzel|2007}} Mainstream AI researchers have given a wide range of opinions on whether progress will be this rapid. A 2012 meta-analysis of 95 such opinions found a bias towards predicting that the onset of AGI would occur within 16–26 years for modern and historical predictions alike. That paper has been criticized for how it categorized opinions as expert or non-expert.<ref>{{Cite web |last=Grace |first=Katja |date=2016 |title=Error in Armstrong and Sotala 2012 |url=https://aiimpacts.org/error-in-armstrong-and-sotala-2012/ |url-status=live |archive-url=https://web.archive.org/web/20201204012302/https://aiimpacts.org/error-in-armstrong-and-sotala-2012/ |archive-date=4 December 2020 |access-date=2020-08-24 |website=AI Impacts |type=blog}}</ref>

In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton developed a neural network called AlexNet, which won the ImageNet competition with a top-5 test error rate of 15.3%, significantly better than the second-best entry's rate of 26.3% (the traditional approach used a weighted sum of scores from different pre-defined classifiers).<ref name=":5">{{Cite journal |last=Butz |first=Martin V. |date=2021-03-01 |title=Towards Strong AI |journal=KI – Künstliche Intelligenz |language=en |volume=35 |issue=1 |pages=91–101 |doi=10.1007/s13218-021-00705-x |issn=1610-1987 |s2cid=256065190 |doi-access=free}}</ref> AlexNet was regarded as the initial ground-breaker of the current deep learning wave.<ref name=":5"/>
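The top-5 test error metric used for this result counts an image as correctly classified if the true label appears among the model's five highest-scored classes; a minimal sketch with invented class scores:

```python
# Minimal illustration of the top-5 error metric: an image counts as
# correctly classified if its true label is among the five classes the
# model scores highest. All scores below are invented for illustration.

def top5_error(predictions, true_labels):
    errors = 0
    for scores, truth in zip(predictions, true_labels):
        top5 = sorted(scores, key=scores.get, reverse=True)[:5]
        if truth not in top5:
            errors += 1
    return errors / len(true_labels)

scores_a = {"cat": 0.5, "dog": 0.2, "fox": 0.1, "car": 0.08, "bus": 0.07, "ant": 0.05}
scores_b = {"cat": 0.4, "dog": 0.3, "ant": 0.2, "fox": 0.05, "car": 0.03, "bus": 0.02}
print(top5_error([scores_a, scores_b], ["ant", "ant"]))  # 0.5: first misses, second hits
```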

In 2017, researchers Feng Liu, Yong Shi, and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI, Apple's Siri, and others. At the maximum, these AIs reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade. An average adult scores about 100. Similar tests had been carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{Cite journal |last1=Liu |first1=Feng |last2=Shi |first2=Yong |last3=Liu |first3=Ying |date=2017 |title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence |journal=Annals of Data Science |volume=4 |issue=2 |pages=179–191 |arxiv=1709.10242 |doi=10.1007/s40745-017-0109-0 |s2cid=37900130}}</ref><ref>{{Cite web |last=Brien |first=Jörn |date=2017-10-05 |title=Google-KI doppelt so schlau wie Siri |trans-title=Google AI is twice as smart as Siri – but a six-year-old beats both |url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003 |url-status=live |archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/ |archive-date=3 January 2019 |access-date=2 January 2019 |language=de}}</ref>

In 2020, OpenAI developed GPT-3, a language model capable of performing many diverse tasks without specific training. According to Gary Grossman in a VentureBeat article, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to be classified as a narrow AI system.<ref>{{Cite web |last=Grossman |first=Gary |date=September 3, 2020 |title=We're entering the AI twilight zone between narrow and general AI |url=https://venturebeat.com/2020/09/03/were-entering-the-ai-twilight-zone-between-narrow-and-general-ai/ |url-status=live |archive-url=https://web.archive.org/web/20200904191750/https://venturebeat.com/2020/09/03/were-entering-the-ai-twilight-zone-between-narrow-and-general-ai/ |archive-date=4 September 2020 |access-date=September 5, 2020 |publisher=] |quote="Certainly, too, there are those who claim we are already seeing an early example of an AGI system in the recently announced GPT-3 natural language processing (NLP) neural network. ... So is GPT-3 the first example of an AGI system? This is debatable, but the consensus is that it is not AGI. ... If nothing else, GPT-3 tells us there is a middle ground between narrow and general AI."}}</ref>

In the same year, Jason Rohrer used his GPT-3 account to develop a chatbot, and provided a chatbot-developing platform called "Project December". OpenAI asked for changes to the chatbot to comply with their safety guidelines; Rohrer disconnected Project December from the GPT-3 API.<ref>{{Cite news |last=Quach |first=Katyanna |title=A developer built an AI chatbot using GPT-3 that helped a man speak again to his late fiancée. OpenAI shut it down |url=https://www.theregister.com/2021/09/08/project_december_openai_gpt_3/ |url-status=live |archive-url=https://web.archive.org/web/20211016232620/https://www.theregister.com/2021/09/08/project_december_openai_gpt_3/ |archive-date=16 October 2021 |access-date=16 October 2021 |publisher=The Register}}</ref>

In 2022, DeepMind developed Gato, a "general-purpose" system capable of performing more than 600 different tasks.<ref>{{Citation |last=Wiggers |first=Kyle |title=DeepMind's new AI can perform over 600 tasks, from playing games to controlling robots |date=May 13, 2022 |work=] |url=https://techcrunch.com/2022/05/13/deepminds-new-ai-can-perform-over-600-tasks-from-playing-games-to-controlling-robots/ |access-date=12 June 2022 |archive-url=https://web.archive.org/web/20220616185232/https://techcrunch.com/2022/05/13/deepminds-new-ai-can-perform-over-600-tasks-from-playing-games-to-controlling-robots/ |archive-date=16 June 2022 |url-status=live}}</ref>

In 2023, Microsoft published a study on an early version of OpenAI's GPT-4, contending that it exhibited more general intelligence than previous AI models and demonstrated human-level performance in tasks spanning multiple domains, such as mathematics, coding, and law. This research sparked a debate on whether GPT-4 could be considered an early, incomplete version of artificial general intelligence, emphasizing the need for further exploration and evaluation of such systems.<ref>{{Cite arXiv |eprint=2303.12712 |class=cs.CL |first1=Sébastien |last1=Bubeck |first2=Varun |last2=Chandrasekaran |title=Sparks of Artificial General Intelligence: Early experiments with GPT-4 |date=22 March 2023 |last3=Eldan |first3=Ronen |last4=Gehrke |first4=Johannes |last5=Horvitz |first5=Eric |last6=Kamar |first6=Ece |last7=Lee |first7=Peter |last8=Lee |first8=Yin Tat |last9=Li |first9=Yuanzhi |last10=Lundberg |first10=Scott |last11=Nori |first11=Harsha |last12=Palangi |first12=Hamid |last13=Ribeiro |first13=Marco Tulio |last14=Zhang |first14=Yi}}</ref>

In 2023, the AI researcher Geoffrey Hinton stated that:<ref>{{Cite news |last=Metz |first=Cade |date=2023-05-01 |title='The Godfather of A.I.' Leaves Google and Warns of Danger Ahead |url=https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html |access-date=2023-06-07 |work=The New York Times |language=en-US |issn=0362-4331}}</ref>

{{Blockquote|text=The idea that this stuff could actually get smarter than people – a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.}}

In May 2023, Demis Hassabis similarly said that "The progress in the last few years has been pretty incredible", and that he sees no reason why it would slow down, expecting AGI within a decade or even a few years.<ref>{{Cite web |last=Bove |first=Tristan |title=A.I. could rival human intelligence in 'just a few years,' says CEO of Google's main A.I. research lab |url=https://fortune.com/2023/05/03/google-deepmind-ceo-agi-artificial-intelligence/ |access-date=2024-09-04 |website=Fortune |language=en}}</ref> In March 2024, Nvidia's CEO, Jensen Huang, stated his expectation that within five years, AI would be capable of passing any test at least as well as humans.<ref>{{Cite news |last=Nellis |first=Stephen |date=March 2, 2024 |title=Nvidia CEO says AI could pass human tests in five years |url=https://www.reuters.com/technology/nvidia-ceo-says-ai-could-pass-human-tests-five-years-2024-03-01/ |work=Reuters}}</ref> In June 2024, the AI researcher Leopold Aschenbrenner, a former OpenAI employee, estimated AGI by 2027 to be "strikingly plausible".<ref>{{Cite news |last=Aschenbrenner |first=Leopold |title=SITUATIONAL AWARENESS, The Decade Ahead |url=https://situational-awareness.ai/}}</ref>

== Whole brain emulation ==
{{Main|Whole brain emulation|Brain simulation}}

While the development of transformer models such as GPT-4 is considered the most promising path to AGI,<ref name=":18">{{Cite news |last=Sullivan |first=Mark |date=October 18, 2023 |title=Why everyone seems to disagree on how to define Artificial General Intelligence |url=https://www.fastcompany.com/90968623/why-everyone-seems-to-disagree-on-how-to-define-artificial-general-intelligence |work=Fast Company}}</ref><ref>{{Cite web |last=Nosta |first=John |date=January 5, 2024 |title=The Accelerating Path to Artificial General Intelligence |url=https://www.psychologytoday.com/intl/blog/the-digital-self/202401/the-accelerating-path-to-artificial-general-intelligence |access-date=2024-03-30 |website=Psychology Today |language=en}}</ref> ] can serve as an alternative approach. With whole brain simulation, a brain model is built by scanning and mapping a biological brain in detail, and then copying and simulating it on a computer system or another computational device. The simulated model must be sufficiently faithful to the original, so that it behaves in practically the same way as the original brain.<ref>{{Cite web |last=Hickey |first=Alex |title=Whole Brain Emulation: A Giant Step for Neuroscience |url=https://www.emergingtechbrew.com/stories/2019/08/15/whole-brain-emulation-giant-step-neuroscience |access-date=2023-11-08 |website=Tech Brew |language=en-us}}</ref> Whole brain emulation is a type of ] that is discussed in ] and ], and for medical research purposes. It has been discussed in ] research{{Sfn|Goertzel|2007}} as an approach to strong AI. ] technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book ''The Singularity Is Near''<ref name=K/> predicts that a map of sufficient quality will become available on a similar timescale to the computing power required to emulate it.

===Early estimates===
Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500 mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where consciousness arises.{{Sfn|Sandberg|Boström|2008}}|upright=2.6]] For low-level brain simulation, a very powerful cluster of computers or GPUs would be required, given the enormous quantity of synapses within the human brain. Each of the 10<sup>11</sup> (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. The brain of a three-year-old child has about 10<sup>15</sup> synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10<sup>14</sup> to 5×10<sup>14</sup> synapses (100 to 500 trillion).{{Sfn|Drachman|2005}} An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10<sup>14</sup> (100 trillion) synaptic updates per second (]).{{Sfn|Russell|Norvig|2003}}
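The counts above can be sanity-checked with back-of-the-envelope arithmetic; all figures are the rough orders of magnitude quoted in this section, not measurements:

```python
# Rough orders of magnitude quoted above; nothing here is a precise measurement.
NEURONS = 1e11               # ~10^11 neurons in the human brain
SYNAPSES_PER_NEURON = 7_000  # average synaptic connections per neuron

total_synapses = NEURONS * SYNAPSES_PER_NEURON
# 7e14 synapses: above the adult range (1e14 to 5e14) and below the
# three-year-old figure (~1e15) given in the text.
print(f"{total_synapses:.0e}")
```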

In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10<sup>16</sup> computations per second (cps).{{Efn|In "Mind Children"{{Sfn|Moravec|1988|page=61}} 10<sup>15</sup> cps is used. More recently, in 1997,{{Sfn|Moravec|1998}} Moravec argued for 10<sup>8</sup> MIPS which would roughly correspond to 10<sup>14</sup> cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.}} (For comparison, if a "computation" was equivalent to one "floating-point operation" – a measure used to rate current supercomputers – then 10<sup>16</sup> "computations" would be equivalent to 10 petaFLOPS, achieved in 2011, while 10<sup>18</sup> was achieved in 2022.) He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.
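The kind of extrapolation Kurzweil describes can be sketched as follows; the 1997 baseline (roughly the speed of the fastest supercomputer of that year) and the 1.1-year doubling time from the figure caption above are illustrative assumptions, not his exact inputs.

```python
import math

# Sketch of an exponential-growth extrapolation: given a doubling time for
# computing power, estimate when top hardware reaches a target speed.
# Baseline and doubling time are illustrative assumptions.

def year_reached(target_cps, base_year=1997, base_cps=1.3e12, doubling_years=1.1):
    doublings = math.log2(target_cps / base_cps)
    return base_year + doublings * doubling_years

print(round(year_reached(1e16)))  # 2011 under these particular assumptions
```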

===Current research===
The Human Brain Project, an EU-funded initiative active from 2013 to 2023, developed a particularly detailed and publicly accessible atlas of the human brain.<ref>{{Cite news |last=Holmgaard Mersh |first=Amalie |date=September 15, 2023 |title=Decade-long European research project maps the human brain |url=https://www.euractiv.com/section/health-consumers/news/decade-long-european-research-project-maps-the-human-brain/ |work=euractiv}}</ref> In 2023, researchers from Duke University performed a high-resolution scan of a mouse brain.

===Criticisms of simulation-based approaches===
The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in broad outline. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational power several orders of magnitude greater than Kurzweil's estimate. In addition, the estimates do not account for glial cells, which are known to play a role in cognitive processes.<ref name="Discover2011JanFeb">{{Cite journal |last=Swaminathan, Nikhil |date=Jan–Feb 2011 |title=Glia—the other brain cells |url=http://discovermagazine.com/2011/jan-feb/62 |url-status=live |journal=Discover |archive-url=https://web.archive.org/web/20140208071350/http://discovermagazine.com/2011/jan-feb/62 |archive-date=8 February 2014 |access-date=24 January 2014}}</ref>
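For comparison, the "simple" artificial neuron model referred to here reduces a cell to a weighted sum passed through a fixed nonlinearity; a generic sketch, not any particular simulator's model:

```python
import math

# The simple artificial neuron contrasted with biological neurons above:
# inputs are combined as a weighted sum plus bias, then squashed by a fixed
# nonlinearity. Dendritic geometry, ion channels, neurotransmitter chemistry
# and glial interactions are all absent from this model.

def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid "firing rate"

print(round(artificial_neuron([0.5, 1.0], [0.8, -0.2], 0.1), 4))  # 0.5744
```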

A fundamental criticism of the simulated brain approach derives from embodied cognition theory, which asserts that human embodiment is an essential aspect of human intelligence and is necessary to ground meaning.<ref>{{Harvnb|de Vega|Glenberg|Graesser|2008}}. A wide range of views in current research, all of which require grounding to some degree</ref><ref name=":15">{{Cite web |last=Thornton |first=Angela |date=2023-06-26 |title=How uploading our minds to a computer might become possible |url=http://theconversation.com/how-uploading-our-minds-to-a-computer-might-become-possible-206804 |access-date=2023-11-08 |website=The Conversation |language=en-US}}</ref> If this theory is correct, any fully functional brain model will need to encompass more than just the neurons (e.g., a robotic body). Goertzel{{Sfn|Goertzel|2007}} proposes virtual embodiment (as in metaverses such as ''Second Life'') as an option, but it is unknown whether this would be sufficient.

== Philosophical perspective ==
{{See also|Philosophy of artificial intelligence|Turing test}}

=== "Strong AI" as defined in philosophy ===
In 1980, philosopher John Searle coined the term "strong AI" as part of his Chinese room argument.<ref>{{Harvnb|Searle|1980}}</ref> He proposed a distinction between two hypotheses about artificial intelligence:{{Efn|As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis."{{Sfn|Russell|Norvig|2003}}}}

* '''Strong AI hypothesis''': An artificial intelligence system can have "a mind" and "consciousness".
* '''Weak AI hypothesis''': An artificial intelligence system can (only) ''act like'' it thinks and has a mind and consciousness.

The first one he called "strong" because it makes a ''stronger'' statement: it assumes something special has happened to the machine that goes beyond those abilities that we can test. The behaviour of a "weak AI" machine would be precisely identical to a "strong AI" machine, but the latter would also have subjective conscious experience. This usage is also common in academic AI research and textbooks.<ref>For example:


* {{Harvnb|Russell|Norvig|2003}},
* {{Webarchive|url=https://web.archive.org/web/20071203103022/http://www.encyclopedia.com/doc/1O87-strongAI.html|date=3 December 2007}} (quoted in "Encyclopedia.com"),
* {{Webarchive|url=https://web.archive.org/web/20080719074502/http://www.aaai.org/AITopics/html/phil.html|date=19 July 2008}} (quoted in "AITopics"),
* {{Webarchive|url=https://web.archive.org/web/20080513031753/http://www.cbhd.org/resources/biotech/tongen_2003-11-07.htm|date=13 May 2008}} Anthony Tongen</ref>


In contrast to Searle and mainstream AI, some futurists such as Ray Kurzweil use the term "strong AI" to mean "human level artificial general intelligence".<ref name="K"/> This is not the same as Searle's strong AI, unless it is assumed that consciousness is necessary for human-level AGI. Academic philosophers such as Searle do not believe that is the case, and to most artificial intelligence researchers the question is out-of-scope.{{Sfn|Russell|Norvig|2003|p=947}}


Mainstream AI is most interested in how a program ''behaves''.<ref>though see ] for curiosity by the field about why a program behaves the way it does</ref> According to Stuart Russell and Peter Norvig, "as long as the program works, they don't care if you call it real or a simulation."{{Sfn|Russell|Norvig|2003|p=947}} If the program can behave ''as if'' it has a mind, then there is no need to know whether it ''actually'' has one – indeed, there would be no way to tell. For AI research, Searle's "weak AI hypothesis" is equivalent to the statement "artificial general intelligence is possible". Thus, according to Russell and Norvig, "most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."{{Sfn|Russell|Norvig|2003|p=947}} For academic AI research, then, "strong AI" and "AGI" are two different things.


=== Consciousness ===


{{Main article|Artificial consciousness}}


Consciousness can have various meanings, and some aspects play significant roles in science fiction and the ethics of artificial intelligence:


* '''Sentience''' (or "phenomenal consciousness"): The ability to "feel" perceptions or emotions subjectively, as opposed to the ability to ''reason'' about perceptions. Some philosophers, such as David Chalmers, use the term "consciousness" to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience.<ref>{{Cite news |last=Chalmers |first=David J. |date=August 9, 2023 |title=Could a Large Language Model Be Conscious? |url=https://www.bostonreview.net/articles/could-a-large-language-model-be-conscious/ |work=Boston Review}}</ref> Determining why and how subjective experience arises is known as the hard problem of consciousness.<ref>{{Cite web |last=Seth |first=Anil |title=Consciousness |url=https://www.newscientist.com/definition/consciousness/ |access-date=2024-09-05 |website=New Scientist |language=en-US}}</ref> Thomas Nagel explained in 1974 that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what is it like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e., has consciousness) but a toaster does not.{{Sfn|Nagel|1974}} In 2022, a Google engineer claimed that the company's AI chatbot, LaMDA, had achieved sentience, though this claim was widely disputed by other experts.<ref>{{Cite news |date=11 June 2022 |title=The Google engineer who thinks the company's AI has come to life |url=https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/ |access-date=2023-06-12 |newspaper=The Washington Post}}</ref>


* '''Self-awareness''': To have conscious awareness of oneself as a separate individual, especially to be consciously aware of one's own thoughts. This is opposed to simply being the "subject of one's thought"—an operating system or debugger is able to be "aware of itself" (that is, to represent itself in the same way it represents everything else)—but this is not what people typically mean when they use the term "self-awareness".{{Efn|Alan Turing made this point in 1950.{{Sfn|Turing|1950}}}}

These traits have a moral dimension. AI sentience would give rise to concerns of welfare and legal protection, similarly to animals.<ref>{{Cite magazine |last=Kateman |first=Brian |date=2023-07-24 |title=AI Should Be Terrified of Humans |url=https://time.com/6296234/ai-should-be-terrified-of-humans/ |access-date=2024-09-05 |magazine=TIME |language=en}}</ref> Other aspects of consciousness related to cognitive capabilities are also relevant to the concept of AI rights.<ref>{{Cite web |last=Nosta |first=John |date=December 18, 2023 |title=Should Artificial Intelligence Have Rights? |url=https://www.psychologytoday.com/us/blog/the-digital-self/202312/should-artificial-intelligence-have-rights |access-date=2024-09-05 |website=Psychology Today |language=en-US}}</ref> Figuring out how to integrate advanced AI with existing legal and social frameworks is an emergent issue.<ref>{{Cite news |last=Akst |first=Daniel |date=April 10, 2023 |title=Should Robots With Artificial Intelligence Have Moral or Legal Rights? |url=https://www.wsj.com/articles/robots-ai-legal-rights-3c47ef40 |work=The Wall Street Journal}}</ref>

==Benefits==
AGI could have a wide variety of applications. If oriented towards such goals, AGI could help mitigate various problems in the world such as hunger, poverty and health problems.<ref>{{Cite news |date=2021-08-23 |title=Artificial General Intelligence – Do the cost outweigh benefits? |url=https://coe-dsai.nasscom.in/artificial-general-intelligence-do-the-cost-outweigh-benefits/ |access-date=2023-06-07 |language=en-US}}</ref>


AGI could improve productivity and efficiency in most jobs. For example, in public health, AGI could accelerate medical research, notably against cancer.<ref>{{Cite web |date=7 April 2020 |title=How we can Benefit from Advancing Artificial General Intelligence (AGI) – Unite.AI |url=https://www.unite.ai/artificial-general-intelligence-agi/ |access-date=2023-06-07 |website=www.unite.ai}}</ref> It could take care of the elderly,<ref name=":8">{{Cite web |last1=Talty |first1=Jules |last2=Julien |first2=Stephan |title=What Will Our Society Look Like When Artificial Intelligence Is Everywhere? |url=https://www.smithsonianmag.com/innovation/artificial-intelligence-future-scenarios-180968403/ |access-date=2023-06-07 |website=Smithsonian Magazine |language=en-us}}</ref> and democratize access to rapid, high-quality medical diagnostics. It could offer fun, cheap and personalized education.<ref name=":8"/> The need to work to subsist could ] if the wealth produced is properly ].<ref name=":8"/><ref name=":9">{{Cite magazine |last=Stevenson |first=Matt |date=2015-10-08 |title=Answers to Stephen Hawking's AMA are Here! |url=https://www.wired.com/brandlab/2015/10/stephen-hawkings-ama/ |access-date=2023-06-08 |magazine=Wired |language=en-US |issn=1059-1028}}</ref> This also raises the question of the place of humans in a radically automated society.

AGI could also help to make rational decisions, and to anticipate and prevent disasters. It could also help to reap the benefits of potentially catastrophic technologies such as ] or ], while avoiding the associated risks.<ref name=":7">{{Cite book |last=Bostrom |first=Nick |title=Superintelligence: paths, dangers, strategies |date=2017 |publisher=Oxford University Press |isbn=978-0-1996-7811-2 |edition=Reprinted with corrections 2017 |location=Oxford, United Kingdom; New York, New York, USA |language=en |chapter=§ Preferred order of arrival}}</ref> If an AGI's primary goal is to prevent existential catastrophes such as human extinction (which could be difficult if the Vulnerable World Hypothesis turns out to be true),<ref>{{Cite web |last=Piper |first=Kelsey |date=2018-11-19 |title=How technological progress is making it likelier than ever that humans will destroy ourselves |url=https://www.vox.com/future-perfect/2018/11/19/18097663/nick-bostrom-vulnerable-world-global-catastrophic-risks |access-date=2023-06-08 |website=Vox |language=en}}</ref> it could take measures to drastically reduce the risks<ref name=":7"/> while minimizing the impact of these measures on our quality of life.

==Risks==
=== Existential risks ===
{{Main|Existential risk from artificial general intelligence|AI safety}}
AGI may represent multiple types of ], which are risks that threaten "the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development".<ref>{{Cite news |last=Doherty |first=Ben |date=2018-05-17 |title=Climate change an 'existential security risk' to Australia, Senate inquiry says |url=https://www.theguardian.com/environment/2018/may/18/climate-change-an-existential-security-risk-to-australia-senate-inquiry-says |access-date=2023-07-16 |work=The Guardian |language=en-GB |issn=0261-3077}}</ref> The risk of human extinction from AGI has been the topic of many debates, but there is also the possibility that the development of AGI would lead to a permanently flawed future. Notably, it could be used to spread and preserve the set of values of whoever develops it. If humanity still has moral blind spots similar to slavery in the past, AGI might irreversibly entrench them, preventing ].<ref>{{Cite book |last=MacAskill |first=William |title=What we owe the future |date=2022 |publisher=Basic Books |isbn=978-1-5416-1862-6 |location=New York, NY}}</ref> Furthermore, AGI could facilitate mass surveillance and indoctrination, which could be used to create a stable repressive worldwide totalitarian regime.<ref name=":02">{{Cite book |last=Ord |first=Toby |title=The Precipice: Existential Risk and the Future of Humanity |publisher=Bloomsbury Publishing |date=2020 |isbn=978-1-5266-0021-9 |chapter=Chapter 5: Future Risks, Unaligned Artificial Intelligence}}</ref><ref>{{Cite web |last=Al-Sibai |first=Noor |date=13 February 2022 |title=OpenAI Chief Scientist Says Advanced AI May Already Be Conscious |url=https://futurism.com/the-byte/openai-already-sentient |access-date=2023-12-24 |website=Futurism}}</ref> There is also a risk for the machines themselves.
If machines that are sentient or otherwise worthy of moral consideration are mass created in the future, engaging in a civilizational path that indefinitely neglects their welfare and interests could be an existential catastrophe.<ref>{{Cite web |last=Samuelsson |first=Paul Conrad |date=2019 |title=Artificial Consciousness: Our Greatest Ethical Challenge |url=https://philosophynow.org/issues/132/Artificial_Consciousness_Our_Greatest_Ethical_Challenge |access-date=2023-12-23 |website=Philosophy Now}}</ref><ref>{{Cite magazine |last=Kateman |first=Brian |date=2023-07-24 |title=AI Should Be Terrified of Humans |url=https://time.com/6296234/ai-should-be-terrified-of-humans/ |access-date=2023-12-23 |magazine=TIME |language=en}}</ref> Considering how much AGI could improve humanity's future and help reduce other existential risks, ] calls these existential risks "an argument for proceeding with due caution", not for "abandoning AI".<ref name=":02"/>


==== Risk of loss of control and human extinction ====
The thesis that AI poses an existential risk for humans, and that this risk needs more attention, is controversial but has been endorsed in 2023 by many public figures, AI researchers and CEOs of AI companies such as ], ], ], ], ] and ].<ref>{{Cite news |last=Roose |first=Kevin |date=2023-05-30 |title=A.I. Poses 'Risk of Extinction,' Industry Leaders Warn |url=https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html |access-date=2023-12-24 |work=The New York Times |language=en-US |issn=0362-4331}}</ref><ref name=":16"/>


In 2014, ] criticized widespread indifference:

{{Cquote|So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here{{Emdash}}we'll leave the lights on?' Probably not{{Emdash}}but this is more or less what is happening with AI.<ref name="hawking editorial">{{Cite news |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence&nbsp;– but are we taking AI seriously enough?' |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |url-status=live |archive-url=https://web.archive.org/web/20150925153716/http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html |archive-date=25 September 2015 |access-date=3 December 2014 |work=]}}</ref>
| author =
}}The potential fate of humanity has sometimes been compared to the fate of gorillas threatened by human activities. The comparison states that greater intelligence allowed humanity to dominate gorillas, which are now vulnerable in ways that they could not have anticipated. As a result, the gorilla has become an endangered species, not out of malice, but simply as a collateral damage from human activities.<ref>{{Cite web |last=Herger |first=Mario |title=The Gorilla Problem – Enterprise Garage |url=https://www.enterprisegarage.io/2019/10/the-gorilla-problem/ |access-date=2023-06-07 |language=en-US}}</ref>


The skeptic ] considers that AGIs will have no desire to dominate humanity and that we should be careful not to anthropomorphize them and interpret their intents as we would for humans. He said that people won't be "smart enough to design super-intelligent machines, yet ridiculously stupid to the point of giving it moronic objectives with no safeguards".<ref>{{Cite web |title=The fascinating Facebook debate between Yann LeCun, Stuart Russel and Yoshua Bengio about the risks of strong AI |url=https://www.parlonsfutur.com/blog/the-fascinating-facebook-debate-between-yann-lecun-stuart-russel-and-yoshua |access-date=2023-06-08 |website=The fascinating Facebook debate between Yann LeCun, Stuart Russel and Yoshua Bengio about the risks of strong AI |language=fr}}</ref> On the other hand, the concept of ] suggests that, almost regardless of their goals, ]s will have reasons to try to survive and acquire more power as intermediate steps to achieving those goals, and that this does not require having emotions.<ref>{{Cite web |date=2014-08-22 |title=Will Artificial Intelligence Doom The Human Race Within The Next 100 Years? |url=https://www.huffpost.com/entry/artificial-intelligence-oxford_n_5689858 |access-date=2023-06-08 |website=HuffPost |language=en}}</ref>

Many scholars who are concerned about existential risk advocate for more research into solving the "]" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximise the probability that their recursively-improving AI would continue to behave in a ], rather than destructive, manner after it reaches superintelligence?<ref name="physica_scripta2">{{Cite journal |last1=Sotala |first1=Kaj |last2=Yampolskiy |first2=Roman V. |author-link2=Roman Yampolskiy |date=2014-12-19 |title=Responses to catastrophic AGI risk: a survey |journal=] |volume=90 |issue=1 |page=018001 |doi=10.1088/0031-8949/90/1/018001 |issn=0031-8949 |doi-access=free}}</ref><ref>{{Cite book |last=Bostrom |first=Nick |author-link=Nick Bostrom |title=Superintelligence: Paths, Dangers, Strategies |title-link=Superintelligence: Paths, Dangers, Strategies |date=2014 |publisher=Oxford University Press |isbn=978-0-1996-7811-2 |edition=First}}<!-- preface --></ref> Solving the control problem is complicated by the ] (which could lead to a ] of safety precautions in order to release products before competitors),<ref>{{Cite magazine |last1=Chow |first1=Andrew R. |last2=Perrigo |first2=Billy |date=2023-02-16 |title=The AI Arms Race Is On. Start Worrying |url=https://time.com/6255952/ai-impact-chatgpt-microsoft-google/ |access-date=2023-12-24 |magazine=TIME |language=en}}</ref> and the use of AI in weapon systems.<ref>{{Cite web |last=Tetlow |first=Gemma |date=January 12, 2017 |title=AI arms race risks spiralling out of control, report warns |url=https://www.ft.com/content/b56d57e8-d822-11e6-944b-e7eb37a6aa8e |url-access=subscription |url-status=live |archive-url=https://archive.today/20220411043213/https://www.ft.com/content/b56d57e8-d822-11e6-944b-e7eb37a6aa8e |archive-date=11 April 2022 |access-date=2023-12-24 |website=Financial Times}}</ref>

The thesis that AI can pose existential risk also has detractors. Skeptics usually say that AGI is unlikely in the short-term, or that concerns about AGI distract from other issues related to current AI.<ref>{{Cite news |last1=Milmo |first1=Dan |last2=Stacey |first2=Kiran |date=2023-09-25 |title=Experts disagree over threat posed but artificial intelligence cannot be ignored |url=https://www.theguardian.com/technology/2023/sep/25/experts-disagree-over-threat-posed-but-artificial-intelligence-cannot-be-ignored-ai |access-date=2023-12-24 |work=The Guardian |language=en-GB |issn=0261-3077}}</ref> Former ] fraud czar ] considers that for many people outside of the technology industry, existing chatbots and LLMs are already perceived as though they were AGI, leading to further misunderstanding and fear.<ref>{{Cite web |date=2023-07-20 |title=Humanity, Security & AI, Oh My! (with Ian Bremmer & Shuman Ghosemajumder) |url=https://cafe.com/stay-tuned/humanity-security-ai-oh-my-with-ian-bremmer-shuman-ghosemajumder/ |access-date=2023-09-15 |website=CAFE |language=en-US}}</ref>

Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God.<ref name="atlantic-but-what2">{{Cite magazine |last=Hamblin |first=James |date=9 May 2014 |title=But What Would the End of Humanity Mean for Me? |url=https://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |url-status=live |archive-url=https://web.archive.org/web/20140604211145/http://www.theatlantic.com/health/archive/2014/05/but-what-does-the-end-of-humanity-mean-for-me/361931/ |archive-date=4 June 2014 |access-date=12 December 2015 |magazine=The Atlantic}}</ref> Some researchers believe that the communication campaigns on AI existential risk by certain AI groups (such as OpenAI, Anthropic, DeepMind, and Conjecture) may be an attempt at regulatory capture and at inflating interest in their products.<ref name="telegraph">{{Cite news |last=Titcomb |first=James |date=30 October 2023 |title=Big Tech is stoking fears over AI, warn scientists |url=https://www.telegraph.co.uk/business/2023/10/30/big-tech-stoking-fears-over-ai-warn-scientists/ |access-date=2023-12-07 |work=The Telegraph |language=en}}</ref><ref name="afr">{{Cite web |last=Davidson |first=John |date=30 October 2023 |title=Google Brain founder says big tech is lying about AI extinction danger |url=https://www.afr.com/technology/google-brain-founder-says-big-tech-is-lying-about-ai-human-extinction-danger-20231027-p5efnz |url-access=subscription |url-status=live |archive-url=https://web.archive.org/web/20231207203025/https://www.afr.com/technology/google-brain-founder-says-big-tech-is-lying-about-ai-human-extinction-danger-20231027-p5efnz |archive-date=December 7, 2023 |access-date=2023-12-07 |website=Australian Financial Review |language=en}}</ref>

In 2023, the CEOs of Google DeepMind, OpenAI and Anthropic, along with other industry leaders and researchers, issued a joint statement asserting that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."<ref name=":16">{{Cite web |date=May 30, 2023 |title=Statement on AI Risk |url=https://www.safe.ai/statement-on-ai-risk |access-date=2023-06-08 |website=Center for AI Safety}}</ref>

===Mass unemployment===
{{Further|Technological unemployment}}
Researchers from OpenAI estimated that "80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while around 19% of workers may see at least 50% of their tasks impacted".<ref>{{Cite web |last1=Eloundou |first1=Tyna |last2=Manning |first2=Sam |last3=Mishkin |first3=Pamela |last4=Rock |first4=Daniel |date=March 17, 2023 |title=GPTs are GPTs: An early look at the labor market impact potential of large language models |url=https://openai.com/research/gpts-are-gpts |access-date=2023-06-07 |website=OpenAI |language=en-US}}</ref><ref name=":6">{{Cite web |last=Hurst |first=Luke |date=2023-03-23 |title=OpenAI says 80% of workers could see their jobs impacted by AI. These are the jobs most affected |url=https://www.euronews.com/next/2023/03/23/openai-says-80-of-workers-could-see-their-jobs-impacted-by-ai-these-are-the-jobs-most-affe |access-date=2023-06-08 |website=euronews |language=en}}</ref> They consider office workers to be the most exposed, for example mathematicians, accountants or web designers.<ref name=":6"/> Compared to such systems, AGI could have greater autonomy and a better ability to make decisions, to interface with other computer tools, and to control robotic bodies.

According to Stephen Hawking, the outcome of automation on the quality of life will depend on how the wealth will be redistributed:<ref name=":9"/>
{{Cquote|Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality
}}Elon Musk considers that the automation of society will require governments to adopt a ].<ref>{{Cite web |last=Sheffey |first=Ayelet |date=Aug 20, 2021 |title=Elon Musk says we need universal basic income because 'in the future, physical work will be a choice' |url=https://www.businessinsider.com/elon-musk-universal-basic-income-physical-work-choice-2021-8 |url-access=subscription |url-status=live |archive-url=https://web.archive.org/web/20230709081853/https://www.businessinsider.com/elon-musk-universal-basic-income-physical-work-choice-2021-8 |archive-date=Jul 9, 2023 |access-date=2023-06-08 |website=Business Insider |language=en-US}}</ref>

== See also ==
{{Div col|colwidth=18em}}
* {{Annotated link |Artificial brain}}
* ]
* {{Annotated link |AI safety}}
* {{Annotated link |AI alignment}}
* ''{{Annotated link|A.I. Rising}}''
* ]
* {{Annotated link |Automated machine learning}}
* {{Annotated link |BRAIN Initiative}}
* {{Annotated link |China Brain Project}}
* {{Annotated link |Future of Humanity Institute}}
* {{Annotated link |General game playing}}
* {{Annotated link |Generative artificial intelligence}}
* {{Annotated link |Human Brain Project}}
* {{Annotated link |Intelligence amplification}} (IA)
* {{Annotated link |Machine ethics}}
* ]
* {{Annotated link |Multi-task learning}}
* {{Annotated link |Neural scaling law}}
* {{Annotated link |Outline of artificial intelligence}}
* {{Annotated link |Transhumanism}}
* {{Annotated link |Synthetic intelligence}}
* {{Annotated link |Transfer learning}}
* {{Annotated link |Loebner Prize}}
* {{Annotated link |Hardware for artificial intelligence}}
* {{Annotated link |Weak artificial intelligence}}
{{Div col end}}


==Notes==
{{Notelist|30em}}


==References==
{{Reflist|30em}}


==Sources==
{{Refbegin|indent=yes|30em}}
* {{Cite book |url=https://unesdoc.unesco.org/ark:/48223/pf0000377433/PDF/377433eng.pdf.multi |title=UNESCO Science Report: the Race Against Time for Smarter Development. |date=11 June 2021 |publisher=UNESCO |isbn=978-9-2310-0450-6 |location=Paris |access-date=22 September 2021 |archive-url=https://web.archive.org/web/20220618233752/https://unesdoc.unesco.org/ark:/48223/pf0000377433/PDF/377433eng.pdf.multi |archive-date=18 June 2022 |url-status=live}}
* {{Citation |last=Chalmers |first=David |title=The Conscious Mind |date=1996 |publisher=Oxford University Press. |author-link=David Chalmers}}
* {{Citation |last=Clocksin |first=William |title=Artificial intelligence and the future |date=August 2003 |work=] |volume=361 |issue=1809 |pages=1721–1748 |bibcode=2003RSPTA.361.1721C |doi=10.1098/rsta.2003.1232 |pmid=12952683 |s2cid=31032007}}
* {{Crevier 1993}}
* {{Citation |last=Darrach |first=Brad |title=Meet Shakey, the First Electronic Person |date=20 November 1970 |work=] |pages=58–68}}
* {{Citation |last=Drachman |first=D. |title=Do we have brain to spare? |journal=Neurology |volume=64 |issue=12 |pages=2004–2005 |date=2005 |doi=10.1212/01.WNL.0000166914.38327.BB |pmid=15985565 |s2cid=38482114}}
* {{Citation |last1=Feigenbaum |first1=Edward A. |title=The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World |date=1983 |publisher=Michael Joseph |isbn=978-0-7181-2401-4 |last2=McCorduck |first2=Pamela |author-link=Edward Feigenbaum |author-link2=Pamela McCorduck}}
* {{Citation |title=Artificial General Intelligence |date=2006 |editor-last=Goertzel |editor-first=Ben |editor-last2=Pennachin |editor-first2=Cassio |url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf |archive-url=https://web.archive.org/web/20130320184603/http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf |archive-date=20 March 2013 |publisher=Springer |isbn=978-3-5402-3733-4}}
* {{Citation |last=Goertzel |first=Ben |title=Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's ''The Singularity Is Near'', and McDermott's critique of Kurzweil |date=Dec 2007 |work=Artificial Intelligence |volume=171 |issue=18, Special Review Issue |pages=1161–1173 |url=https://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |access-date=1 April 2009 |archive-url=https://web.archive.org/web/20160107042341/http://scholar.google.com/scholar?hl=sv&lr=&cluster=15189798216526465792 |archive-date=7 January 2016 |url-status=live |doi=10.1016/j.artint.2007.10.011 |author-link=Ben Goertzel |doi-access=free}}
* {{Citation |last=Gubrud |first=Mark |title=Nanotechnology and International Security |date=November 1997 |work=Fifth Foresight Conference on Molecular Nanotechnology |url=http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ |access-date=7 May 2011 |archive-url=https://web.archive.org/web/20110529215447/http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/ |archive-date=29 May 2011 |url-status=live}}
* {{Citation |last=Howe |first=J. |title=Artificial Intelligence at Edinburgh University: a Perspective |date=November 1994 |url=http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html |access-date=30 August 2007 |archive-url=https://web.archive.org/web/20070817012000/http://www.dai.ed.ac.uk/AI_at_Edinburgh_perspective.html |archive-date=17 August 2007 |url-status=live}}
* {{Citation |last=Johnson |first=Mark |title=The body in the mind |date=1987 |publisher=Chicago |isbn=978-0-2264-0317-5}}
* {{Citation |last=Kurzweil |first=Ray |title=The Singularity is Near |title-link=The Singularity is Near |date=2005 |publisher=Viking Press |author-link=Ray Kurzweil}}
* {{Citation |last=Lighthill |first=Professor Sir James |title=Artificial Intelligence: a paper symposium |date=1973 |chapter=Artificial Intelligence: A General Survey |publisher=Science Research Council |author-link=James Lighthill}}
* {{Citation |last1=Luger |first1=George |title=Artificial Intelligence: Structures and Strategies for Complex Problem Solving |page= |date=2004 |url=https://archive.org/details/artificialintell0000luge/page/720 |edition=5th |publisher=The Benjamin/Cummings Publishing Company, Inc. |isbn=978-0-8053-4780-7 |last2=Stubblefield |first2=William}}
* {{Cite book |last=McCarthy |first=John |url=http://www-formal.stanford.edu/jmc/whatisai/whatisai.html |title=What is Artificial Intelligence? |publisher=Stanford University |date=2007b |quote="The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans."}}
| editor2-last = Pennachin | editor2-first= Cassio
* {{Citation |last=Moravec |first=Hans |title=Mind Children |date=1988 |publisher=Harvard University Press |author-link=Hans Moravec}}
| year = 2006
* {{Citation |last=Moravec |first=Hans |title=When will computer hardware match the human brain? |date=1998 |work=Journal of Evolution and Technology |volume=1 |url=http://www.transhumanist.com/volume1/moravec.htm |access-date=23 June 2006 |archive-url=https://web.archive.org/web/20060615031852/http://transhumanist.com/volume1/moravec.htm |archive-date=15 June 2006 |url-status=dead}}
| title=Artificial General Intelligence
* {{Citation |last=Nagel |title=What Is it Like to Be a Bat |journal=Philosophical Review |volume=83 |issue=4 |pages=435–50 |date=1974 |url=http://organizations.utep.edu/Portals/1475/nagel_bat.pdf |access-date=7 November 2009 |archive-url=https://web.archive.org/web/20111016181405/http://organizations.utep.edu/Portals/1475/nagel_bat.pdf |archive-date=16 October 2011 |url-status=live |doi=10.2307/2183914 |jstor=2183914}}
| publisher = Springer | url=http://people.inf.elte.hu/csizsekp/ai/books/artificial-general-intelligence-cognitive-technologies.9783540237334.27156.pdf
* {{Cite journal |last1=Newell |first1=Allen |author-link=Allen Newell |last2=Simon |first2=H. A. |author-link2=Herbert A. Simon |date=1976 |title=Computer Science as Empirical Inquiry: Symbols and Search |journal=Communications of the ACM |volume=19 |issue=3 |pages=113–126 |doi=10.1145/360018.360022 |doi-access=free}}
| isbn = 3-540-23733-X
* {{Citation |last=Nilsson |first=Nils |title=Artificial Intelligence: A New Synthesis |date=1998 |publisher=Morgan Kaufmann Publishers |isbn=978-1-5586-0467-4 |author-link=Nils Nilsson (researcher)}}
}}
* {{Citation |last=NRC |title=Funding a Revolution: Government Support for Computing Research |date=1999 |access-date=29 September 2007 |archive-url=https://web.archive.org/web/20080112001018/http://www.nap.edu/readingroom/books/far/ch9.html |archive-date=12 January 2008 |url-status=live |chapter=Developments in Artificial Intelligence |chapter-url=http://www.nap.edu/readingroom/books/far/ch9.html |publisher=National Academy Press |author-link=United States National Research Council}}
* {{Citation
* {{Citation |last1=Poole |first1=David |title=Computational Intelligence: A Logical Approach |date=1998 |url=http://www.cs.ubc.ca/spider/poole/ci.html |access-date=6 December 2007 |archive-url=https://web.archive.org/web/20090725025030/http://www.cs.ubc.ca/spider/poole/ci.html |archive-date=25 July 2009 |url-status=live |place=New York |publisher=Oxford University Press |last2=Mackworth |first2=Alan |last3=Goebel |first3=Randy |author-link=David Poole (researcher)}}
| last = Goertzel | first = Ben | authorlink = Ben Goertzel
| last2 = Wang | first2 = Pei
| year = 2006
| title = Introduction: Aspects of Artificial General Intelligence
| url=http://sites.google.com/site/narswang/publications/wang-goertzel.AGI_Aspects.pdf?attredirects=1
}}
* {{Russell Norvig 2003}}
* {{Citation |last1=Sandberg |first1=Anders |title=Whole Brain Emulation: A Roadmap |date=2008 |url=http://www.fhi.ox.ac.uk/Reports/2008-3.pdf |access-date=5 April 2009 |archive-url=https://web.archive.org/web/20200325021252/https://www.fhi.ox.ac.uk/reports/2008-3.pdf |archive-date=25 March 2020 |url-status=live |series=Technical Report #2008-3 |publisher=Future of Humanity Institute, Oxford University |last2=Boström |first2=Nick}}
* {{Citation |last=Searle |first=John |title=Minds, Brains and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |date=1980 |url=http://cogprints.org/7150/1/10.1.1.83.5248.pdf |access-date=3 September 2020 |archive-url=https://web.archive.org/web/20190317230215/http://cogprints.org/7150/1/10.1.1.83.5248.pdf |archive-date=17 March 2019 |url-status=live |doi=10.1017/S0140525X00005756 |s2cid=55303721 |author-link=John Searle}}
* {{Citation |last=Simon |first=H. A. |title=The Shape of Automation for Men and Management |date=1965 |place=New York |publisher=Harper & Row |author-link=Herbert A. Simon}}
* {{Turing 1950}}
* {{Citation |title=Symbols and Embodiment: Debates on meaning and cognition |date=2008 |editor-last=de Vega |editor-first=Manuel |editor-last2=Glenberg |editor-first2=Arthur |publisher=Oxford University Press |isbn=978-0-1992-1727-4 |editor3-last=Graesser |editor3-first=Arthur}}
* {{Cite book |last1=Wang |first1=Pei |title=Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms: Proceedings of the AGI Workshop 2006 |last2=Goertzel |first2=Ben |author-link2=Ben Goertzel |publisher=IOS Press |date=2007 |isbn=978-1-5860-3758-1 |pages=1–16 |chapter=Introduction: Aspects of Artificial General Intelligence |access-date=13 December 2020 |chapter-url=https://www.researchgate.net/publication/234801154 |archive-url=https://web.archive.org/web/20210218035513/https://www.researchgate.net/publication/234801154_Introduction_Aspects_of_Artificial_General_Intelligence |archive-date=18 February 2021 |url-status=live |via=ResearchGate}}
{{Refend}}

==Further reading==
{{Refbegin|indent=yes|30em}}
* {{Citation |last=Aleksander |first=Igor |title=Impossible Minds |date=1996 |url=https://archive.org/details/impossiblemindsm0000alek |publisher=World Scientific Publishing Company |isbn=978-1-8609-4036-1 |author-link=Igor Aleksander |url-access=registration}}
* {{Citation |title=Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain |vauthors=Azevedo FA, Carvalho LR, Grinberg LT, Farfel J |date=April 2009 |journal=The Journal of Comparative Neurology |volume=513 |issue=5 |pages=532–541 |url=https://www.researchgate.net/publication/24024444 |access-date=4 September 2013 |archive-url=https://web.archive.org/web/20210218035513/https://www.researchgate.net/publication/24024444_Equal_Numbers_of_Neuronal_and_Nonneuronal_Cells_Make_the_Human_Brain_an_Isometrically_Scaled-Up_Primate_Brain |archive-date=18 February 2021 |url-status=live |doi=10.1002/cne.21974 |pmid=19226510 |s2cid=5200449 |display-authors=etal |via=ResearchGate |s2cid-access=free}}
* {{Citation |last=Berglas |first=Anthony |title=Artificial Intelligence Will Kill Our Grandchildren (Singularity) |date=January 2012 |orig-date=2008 |url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html |access-date=31 August 2012 |archive-url=https://web.archive.org/web/20140723053223/http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html |archive-date=23 July 2014 |url-status=live}}
* ], "Ready for Robots? How to Think about the Future of AI", '']'', vol. 98, no. 4 (July/August 2019), pp. 192–98. ], historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." (p. 197.) Computer scientist ] writes: "Current ] ]s are, at their core, dead simple stupid. They work, but they work by brute force." (p. 198.)
* {{Citation |last=Gelernter |first=David |title=Dream-logic, the Internet and Artificial Thought |url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html |access-date=25 July 2010 |archive-url=https://web.archive.org/web/20100726055120/http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html |archive-date=26 July 2010 |url-status=dead |publisher=Edge}}
* ], "The Fate of Free Will" (review of ], ''Free Agents: How Evolution Gave Us Free Will'', Princeton University Press, 2023, 333 pp.), '']'', vol. LXXI, no. 1 (18 January 2024), pp. 27–28, 30. "] is what distinguishes us from machines. For biological creatures, ] and ] come from acting in the world and experiencing the consequences. ]s – disembodied, strangers to blood, sweat, and tears – have no occasion for that." (p. 30.)
* {{Cite web |last=Halal |first=William E. |title=TechCast Article Series: The Automation of Thought |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}
* Halpern, Sue, "The Coming Tech Autocracy" (review of ], ''AI Needs You: How We Can Change AI's Future and Save Our Own'', Princeton University Press, 274 pp.; ], ''Taming Silicon Valley: How We Can Ensure That AI Works for Us'', MIT Press, 235 pp.; ] and ], ''The Mind's Mirror: Risk and Reward in the Age of AI'', Norton, 280 pp.; ], ''Code Dependent: Living in the Shadow of AI'', Henry Holt, 311 pp.), '']'', vol. LXXI, no. 17 (7 November 2024), pp. 44–46. "'We can't realistically expect that those who hope to get rich from AI are going to have the interests of the rest of us close at heart,' ... writes . 'We can't count on ]s driven by ] contributions to push back.'... Marcus details the demands that citizens should make of their governments and the ]. They include ] on how AI systems work; compensation for individuals if their data is used to train LLMs (])s and the right to consent to this use; and the ability to hold tech companies liable for the harms they cause by eliminating ], imposing cash penalties, and passing stricter ] laws... Marcus also suggests... that a new, AI-specific federal agency, akin to the ], the ], or the ], might provide the most robust oversight.... The ] law professor ]... suggests... establish a professional licensing regime for engineers that would function in a similar way to ]s, ] suits, and the ] in medicine. 'What if, like doctors,' she asks..., 'AI engineers also vowed to ]?'" (p. 46.)
* {{Citation |last1=Holte |first1=R. C. |title=Abstraction and reformulation in artificial intelligence |work=] |volume=358 |issue=1435 |pages=1197–1204 |date=2003 |doi=10.1098/rstb.2003.1317 |pmc=1693218 |pmid=12903653 |last2=Choueiry |first2=B. Y.}}
* ], "A Murder Mystery Puzzle: The literary puzzle '']'', which has stumped humans for decades, reveals the limitations of natural-language-processing algorithms", '']'', vol. 329, no. 4 (November 2023), pp. 81–82. "This murder mystery competition has revealed that although NLP (]) models are capable of incredible feats, their abilities are very much limited by the amount of ] they receive. This could cause problems for researchers who hope to use them to do things such as analyze ]s. In some cases, there are few historical records on long-gone ]s to serve as ] for such a purpose." (p. 82.)
* ], "Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?", '']'', 20 November 2023, pp. 54–59. "If by ']' we mean realistic videos produced using ] that actually deceive people, then they barely exist. The fakes aren't deep, and the deeps aren't fake. A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of ]s, especially smutty ones." (p. 59.)
* Leffer, Lauren, "The Risks of Trusting AI: We must avoid humanizing machine-learning models used in scientific research", '']'', vol. 330, no. 6 (June 2024), pp. 80-81.
* ], "The Chit-Chatbot: Is talking with a machine a conversation?", '']'', 7 October 2024, pp. 12–16.
* ], "Artificial Confidence: Even the newest, buzziest systems of artificial general intelligence are stymied by the same old problems", '']'', vol. 327, no. 4 (October 2022), pp. 42–45.
* {{Citation |last=McCarthy |first=John |title=From here to human-level AI |date=Oct 2007 |journal=Artificial Intelligence |volume=171 |issue=18 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 |author-link=John McCarthy (computer scientist) |doi-access=free}}
* {{McCorduck 2004|ref=none}}
* {{Citation |last=Moravec |first=Hans |title=The Role of Raw Power in Intelligence |date=1976 |url=http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html |access-date=29 September 2007 |archive-url=https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html |archive-date=3 March 2016 |url-status=dead |author-link=Hans Moravec}}
* {{Citation |last1=Newell |first1=Allen |title=Computers and Thought |date=1963 |editor-last=Feigenbaum |editor-first=E. A. |editor-last2=Feldman |editor-first2=J. |chapter=GPS: A Program that Simulates Human Thought |place=New York |publisher=McGraw-Hill |last2=Simon |first2=H. A. |author-link=Allen Newell |author-link2=Herbert A. Simon}}
* {{Citation |last=Omohundro |first=Steve |title=The Nature of Self-Improving Artificial Intelligence |date=2008 |publisher=presented and distributed at the 2007 Singularity Summit, San Francisco, California |author-link=Steve Omohundro}}
* ], "In Front of Their Faces: Does facial-recognition technology lead police to ignore contradictory evidence?", '']'', 20 November 2023, pp. 20–26.
* ], "AI's IQ: ] aced a test but showed that ] cannot be measured by ] alone", '']'', vol. 329, no. 1 (July/August 2023), p. 7. "Despite its high IQ, ] fails at tasks that require real humanlike reasoning or an understanding of the physical and social world.... ChatGPT seemed unable to reason logically and tried to rely on its vast database of... facts derived from online texts."
* Scharre, Paul, "Killer Apps: The Real Dangers of an AI Arms Race", '']'', vol. 98, no. 3 (May/June 2019), pp. 135–44. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.)
* {{Citation |last=Sutherland |first=J. G. |title=Holographic Model of Memory, Learning, and Expression |work=International Journal of Neural Systems |volume=1–3 |pages=256–267 |date=1990}}
* Vincent, James, "Horny Robot Baby Voice: James Vincent on AI chatbots", '']'', vol. 46, no. 19 (10 October 2024), pp. 29–32. " programs are made possible by new technologies but rely on the timeless human tendency to ]." (p. 29.)
* {{Citation |last1=Williams |first1=R. W. |title=The control of neuron number |journal=Annual Review of Neuroscience |volume=11 |pages=423–453 |date=1988 |doi=10.1146/annurev.ne.11.030188.002231 |pmid=3284447 |last2=Herrup |first2=K.}}<!--| access-date = 20 June 2009-->
* {{Citation |last=Yudkowsky |first=Eliezer |title=Levels of Organization in General Intelligence |date=2006 |work=Artificial General Intelligence |editor-last=Goertzel |editor-first=Ben |editor2-last=Pennachin |editor2-first=Cassio |publisher=Springer |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |archive-url=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archive-date=11 April 2009 |url-status=dead |isbn=978-3-5402-3733-4 |author-link=Eliezer Yudkowsky}}
* {{Citation |last=Yudkowsky |first=Eliezer |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |work=Global Catastrophic Risks |date=2008 |bibcode=2008gcr..book..303Y |doi=10.1093/oso/9780198570509.003.0021 |isbn=978-0-1985-7050-9 |author-link=Eliezer Yudkowsky}}
* {{Citation |last=Zucker |first=Jean-Daniel |title=A grounded theory of abstraction in artificial intelligence |date=July 2003 |work=] |volume=358 |issue=1435 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |pmc=1693211 |pmid=12903672}}
{{Refend}}


==External links==
{{Artificial intelligence navbox}}
{{Existential risk from artificial intelligence}}


{{DEFAULTSORT:Artificial general intelligence}} {{DEFAULTSORT:Artificial general intelligence}}
]
]
]

]

Latest revision as of 07:04, 15 January 2025


Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.

Creating AGI is a primary goal of AI research and of companies such as OpenAI and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries.

The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2023, some argue that it may be possible in years or decades; others maintain it might take a century or longer; a minority believe it may never be achieved; and another minority claims that it is already here. Notable AI researcher Geoffrey Hinton has expressed concerns about the rapid progress towards AGI, suggesting it could be achieved sooner than many expect.

There is debate on the exact definition of AGI and regarding whether modern large language models (LLMs) such as GPT-4 are early forms of AGI. AGI is a common topic in science fiction and futures studies.

Contention exists over whether AGI represents an existential risk. Many experts on AI have stated that mitigating the risk of human extinction posed by AGI should be a global priority. Others find the development of AGI to be too remote to present such a risk.

Terminology

AGI is also known as strong AI, full AI, human-level AI, human-level intelligent AI, or general intelligent action.

Some academic sources reserve the term "strong AI" for computer programs that experience sentience or consciousness. In contrast, weak AI (or narrow AI) is able to solve one specific problem but lacks general cognitive abilities. Some academic sources use "weak AI" to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans.

Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence (ASI) is a hypothetical type of AGI that is much more generally intelligent than humans, while the notion of transformative AI relates to AI having a large impact on society, for example, similar to the agricultural or industrial revolution.

A framework for classifying AGI in levels was proposed in 2023 by Google DeepMind researchers. They define five levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI (i.e. an artificial superintelligence) is similarly defined but with a threshold of 100%. They consider large language models like ChatGPT or LLaMA 2 to be instances of emerging AGI.
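The levels above can be read as a simple threshold rule over how large a share of skilled adults a system outperforms. The sketch below is illustrative only: the 50% (competent) and 100% (superhuman) cut-offs come from the definitions in this section, while the 90th- and 99th-percentile cut-offs for "expert" and "virtuoso" follow the DeepMind proposal; the function name and interface are hypothetical.

```python
def agi_level(percentile_outperformed: float) -> str:
    """Map performance on a wide range of non-physical tasks
    (percentage of skilled adults outperformed) to a DeepMind-style
    AGI level. Thresholds as described in the text above."""
    if percentile_outperformed >= 100:
        return "superhuman"   # outperforms all humans (ASI)
    if percentile_outperformed >= 99:
        return "virtuoso"
    if percentile_outperformed >= 90:
        return "expert"
    if percentile_outperformed >= 50:
        return "competent"
    return "emerging"         # e.g. current large language models
```

Note that the framework grades generality and performance jointly; this sketch captures only the performance axis.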

Characteristics

Main article: Artificial intelligence

Various popular definitions of intelligence have been proposed. One of the leading proposals is the Turing test. However, there are other well-known definitions, and some researchers disagree with the more popular approaches.

Intelligence traits

However, researchers generally hold that intelligence is required to do all of the following: reason, use strategy, solve puzzles, and make judgments under uncertainty; represent knowledge, including common-sense knowledge; plan; learn; communicate in natural language; and, if necessary, integrate these skills toward common goals.

Many interdisciplinary approaches (e.g. cognitive science, computational intelligence, and decision making) consider additional traits such as imagination (the ability to form novel mental images and concepts) and autonomy.

Computer-based systems that exhibit many of these capabilities exist (e.g. see computational creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent). There is debate about whether modern AI systems possess them to an adequate degree.

Physical traits

Other capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its expression. These include the ability to sense (e.g. see, hear, etc.) and the ability to act (e.g. move and manipulate objects, change location to explore, etc.), including the ability to detect and respond to hazards.

Although the ability to sense (e.g. see, hear, etc.) and the ability to act (e.g. move and manipulate objects, change location to explore, etc.) can be desirable for some intelligent systems, these physical capabilities are not strictly required for an entity to qualify as AGI—particularly under the thesis that large language models (LLMs) may already be or become AGI. Even from a less optimistic perspective on LLMs, there is no firm requirement for an AGI to have a human-like form; being a silicon-based computational system is sufficient, provided it can process input (language) from the external world in place of human senses. This interpretation aligns with the understanding that AGI has never been prescribed a particular physical embodiment and thus does not demand a capacity for locomotion or traditional "eyes and ears".

Tests for human-level AGI

Several tests meant to confirm human-level AGI have been considered, including:

The Turing Test (Turing)
The Turing test can provide some evidence of intelligence, but it penalizes non-human intelligent behavior and may incentivize artificial stupidity.
Proposed by Alan Turing in his 1950 paper "Computing Machinery and Intelligence," this test involves a human judge engaging in natural language conversations with both a human and a machine designed to generate human-like responses. The machine passes the test if it can convince the judge it is human a significant fraction of the time. Turing proposed this as a practical measure of machine intelligence, focusing on the ability to produce human-like responses rather than on the internal workings of the machine.
Turing described the test as follows:

The idea of the test is that the machine has to try and pretend to be a man, by answering questions put to it, and it will only pass if the pretence is reasonably convincing. A considerable portion of a jury, who should not be expert about machines, must be taken in by the pretence.

In 2014, a chatbot named Eugene Goostman, designed to imitate a 13-year-old Ukrainian boy, reportedly passed a Turing Test event by convincing 33% of judges that it was human. However, this claim was met with significant skepticism from the AI research community, who questioned the test's implementation and its relevance to AGI.
More recently, a 2024 study suggested that GPT-4 was identified as human 54% of the time in a randomized, controlled version of the Turing Test—surpassing older chatbots like ELIZA while still falling behind actual humans (67%).
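The pass criterion in these studies reduces to the fraction of judges a participant deceives, compared against some threshold (Turing's "considerable portion"). A minimal sketch, using hypothetical verdict data that echoes the figures cited above:

```python
def deception_rate(verdicts: list[bool]) -> float:
    """Fraction of interrogations in which the judge labelled the
    participant 'human'. True = judge was convinced."""
    return sum(verdicts) / len(verdicts)

# Hypothetical data: 54 of 100 judges deceived, as reported for
# GPT-4 in the 2024 study, versus 67% for actual human participants.
machine_verdicts = [True] * 54 + [False] * 46
human_verdicts = [True] * 67 + [False] * 33

print(f"machine: {deception_rate(machine_verdicts):.0%}")  # machine: 54%
print(f"human:   {deception_rate(human_verdicts):.0%}")    # human:   67%
```

Whether 54% constitutes "passing" depends entirely on the chosen threshold, which is one reason such results remain contested.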
The Robot College Student Test (Goertzel)
A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree. LLMs can now pass university degree-level exams without even attending the classes.
The Employment Test (Nilsson)
A machine performs an economically important job at least as well as humans in the same job. AIs are now replacing humans in many roles as varied as fast food and marketing.
The Ikea test (Marcus)
Also known as the Flat Pack Furniture Test. An AI views the parts and instructions of an Ikea flat-pack product, then controls a robot to assemble the furniture correctly.
The Coffee Test (Wozniak)
A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons. This has not yet been completed.
The Modern Turing Test (Suleyman)
An AI model is given $100,000 and has to obtain $1 million.

AI-complete problems

Main article: AI-complete

A problem is informally called "AI-complete" or "AI-hard" if it is believed that in order to solve it, one would need to implement AGI, because the solution is beyond the capabilities of a purpose-specific algorithm.

There are many problems that have been conjectured to require general intelligence to solve as well as humans. Examples include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. Even a specific task like translation requires a machine to read and write in both languages, follow the author's argument (reason), understand the context (knowledge), and faithfully reproduce the author's original intent (social intelligence). All of these problems need to be solved simultaneously in order to reach human-level machine performance.

However, many of these tasks can now be performed by modern large language models. According to Stanford University's 2024 AI index, AI has reached human-level performance on many benchmarks for reading comprehension and visual reasoning.

History

Classical AI

Main articles: History of artificial intelligence and Symbolic artificial intelligence

Modern AI research began in the mid-1950s. The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."

Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time. He said in 1967, "Within a generation... the problem of creating 'artificial intelligence' will substantially be solved".

Several classical AI projects, such as Doug Lenat's Cyc project (that began in 1984), and Allen Newell's Soar project, were directed at AGI.

However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". In the early 1980s, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". In response to this and the success of expert systems, both industry and government pumped money into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. For the second time in 20 years, AI researchers who predicted the imminent achievement of AGI had been mistaken. By the 1990s, AI researchers had a reputation for making vain promises. They became reluctant to make predictions at all and avoided mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamers".

Narrow AI research

Main article: Artificial intelligence

In the 1990s and early 21st century, mainstream AI achieved commercial success and academic respectability by focusing on specific sub-problems where AI can produce verifiable results and commercial applications, such as speech recognition and recommendation algorithms. These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is heavily funded in both academia and industry. As of 2018, development in this field was considered an emerging trend, and a mature stage was expected to be reached in more than 10 years.

At the turn of the century, many mainstream AI researchers hoped that strong AI could be developed by combining programs that solve various sub-problems. Hans Moravec wrote in 1988:

I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real-world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts.

However, even at the time, this was disputed. For example, Stevan Harnad of Princeton University concluded his 1990 paper on the symbol grounding hypothesis by stating:

The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer).

Modern artificial general intelligence research

The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. A mathematical formalism of AGI was proposed by Marcus Hutter in 2000. Named AIXI, the proposed AGI agent maximises "the ability to satisfy goals in a wide range of environments". This type of AGI, characterized by the ability to maximise a mathematical definition of intelligence rather than exhibit human-like behaviour, was also called universal artificial intelligence.

The term AGI was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009 by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course on AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers.

As of 2023, a small number of computer scientists are active in AGI research, and many contribute to a series of AGI conferences. However, a growing number of researchers are interested in open-ended learning, which is the idea of allowing AI to continuously learn and innovate like humans do.

Feasibility

Surveys about when experts expect artificial general intelligence.

As of 2023, the development and potential achievement of AGI remains a subject of intense debate within the AI community. While traditional consensus held that AGI was a distant goal, recent advancements have led some researchers and industry figures to claim that early forms of AGI may already exist. AI pioneer Herbert A. Simon speculated in 1965 that "machines will be capable, within twenty years, of doing any work a man can do". This prediction failed to come true. Microsoft co-founder Paul Allen believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition". Writing in The Guardian, roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.

A further challenge is the lack of clarity in defining what intelligence entails. Does it require consciousness? Must it display the ability to set goals as well as pursue them? Is it purely a matter of scale such that if model sizes increase sufficiently, intelligence will emerge? Are facilities such as planning, reasoning, and causal understanding required? Does intelligence require explicitly replicating the brain and its specific faculties? Does it require emotions?

Most AI researchers believe strong AI can be achieved in the future, but some thinkers, like Hubert Dreyfus and Roger Penrose, deny the possibility of achieving strong AI. John McCarthy is among those who believe human-level AI will be accomplished, but that the present level of progress is such that a date cannot accurately be predicted. AI experts' views on the feasibility of AGI wax and wane. Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead. Further considerations of current AGI progress can be found above under Tests for confirming human-level AGI.

A report by Stuart Armstrong and Kaj Sotala of the Machine Intelligence Research Institute found that "over [a] 60-year time frame there is a strong bias towards predicting the arrival of human-level AI as between 15 and 25 years from the time the prediction was made". They analyzed 95 predictions made between 1950 and 2012 on when human-level AI will come about.

In 2023, Microsoft researchers published a detailed evaluation of GPT-4. They concluded: "Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system." Another study in 2023 reported that GPT-4 outperforms 99% of humans on the Torrance tests of creative thinking.

Blaise Agüera y Arcas and Peter Norvig wrote in 2023 that a significant level of general intelligence has already been achieved with frontier models. They wrote that reluctance to accept this view comes from four main reasons: a "healthy skepticism about metrics for AGI", an "ideological commitment to alternative AI theories or techniques", a "devotion to human (or biological) exceptionalism", or a "concern about the economic implications of AGI".

2023 also marked the emergence of large multimodal models (large language models capable of processing or generating multiple modalities such as text, audio, and images).

In 2024, OpenAI released o1-preview, the first of a series of models that "spend more time thinking before they respond". According to Mira Murati, this ability to think before responding represents a new, additional paradigm. It improves model outputs by spending more computing power when generating the answer, whereas the model scaling paradigm improves outputs by increasing the model size, training data and training compute power.

An OpenAI employee, Vahid Kazemi, claimed in 2024 that the company had achieved AGI, stating, "In my opinion, we have already achieved AGI and it’s even more clear with O1." Kazemi clarified that while the AI is not yet "better than any human at any task", it is "better than most humans at most tasks." He also addressed criticisms that large language models (LLMs) merely follow predefined patterns, comparing their learning process to the scientific method of observing, hypothesizing, and verifying. These statements have sparked debate, as they rely on a broad and unconventional definition of AGI—traditionally understood as AI that matches human intelligence across all domains. Critics argue that, while OpenAI's models demonstrate remarkable versatility, they may not fully meet this standard. Notably, Kazemi's comments came shortly after OpenAI removed "AGI" from the terms of its partnership with Microsoft, prompting speculation about the company’s strategic intentions.

Timescales

AI has surpassed humans on a variety of language understanding and visual understanding benchmarks. As of 2023, foundation models still lack advanced reasoning and planning capabilities, but rapid progress is expected.

Progress in artificial intelligence has historically gone through periods of rapid progress separated by periods when progress appeared to stop. Ending each hiatus were fundamental advances in hardware, software or both that created space for further progress. For example, the computer hardware available in the twentieth century was not sufficient to implement deep learning, which requires large numbers of GPUs or comparable parallel processing power.

In the introduction to his 2006 book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century. As of 2007, the consensus in the AGI research community seemed to be that the timeline discussed by Ray Kurzweil in 2005 in The Singularity is Near (i.e. between 2015 and 2045) was plausible. Mainstream AI researchers have given a wide range of opinions on whether progress will be this rapid. A 2012 meta-analysis of 95 such opinions found a bias towards predicting that the onset of AGI would occur within 16–26 years for modern and historical predictions alike. That paper has been criticized for how it categorized opinions as expert or non-expert.

In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton developed a neural network called AlexNet, which won the ImageNet competition with a top-5 test error rate of 15.3%, significantly better than the second-best entry's rate of 26.3% (the traditional approach used a weighted sum of scores from different pre-defined classifiers). AlexNet was regarded as the initial ground-breaker of the current deep learning wave.
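The top-5 error metric used in that competition is simple to state: a prediction counts as correct if the true label appears anywhere among the five highest-scoring classes. A minimal sketch (the scores below are toy values, not real AlexNet outputs):

```python
def top5_error(scores, labels):
    """Fraction of samples whose true label is absent from the five
    highest-scoring class predictions (the ImageNet 'top-5' metric)."""
    misses = 0
    for row, label in zip(scores, labels):
        # indices of the 5 highest scores in this row
        top5 = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:5]
        if label not in top5:
            misses += 1
    return misses / len(labels)

# Two toy samples over 10 classes; in each row class 9 scores highest,
# so classes 5-9 form the top five.
scores = [list(range(10)), list(range(10))]
print(top5_error(scores, [9, 0]))  # label 9 is a hit, label 0 a miss -> 0.5
```

AlexNet's 15.3% corresponds to this fraction computed over the ImageNet test set.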

In 2017, researchers Feng Liu, Yong Shi, and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI, Apple's Siri, and others. At the maximum, these AIs reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade. An adult comes to about 100 on average. Similar tests were carried out in 2014, with the IQ score reaching a maximum value of 27.

In 2020, OpenAI developed GPT-3, a language model capable of performing many diverse tasks without specific training. According to Gary Grossman in a VentureBeat article, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to be classified as a narrow AI system.

In the same year, Jason Rohrer used his GPT-3 account to develop a chatbot, and provided a chatbot-developing platform called "Project December". OpenAI asked for changes to the chatbot to comply with their safety guidelines; Rohrer disconnected Project December from the GPT-3 API.

In 2022, DeepMind developed Gato, a "general-purpose" system capable of performing more than 600 different tasks.

In 2023, Microsoft Research published a study on an early version of OpenAI's GPT-4, contending that it exhibited more general intelligence than previous AI models and demonstrated human-level performance in tasks spanning multiple domains, such as mathematics, coding, and law. This research sparked a debate on whether GPT-4 could be considered an early, incomplete version of artificial general intelligence, emphasizing the need for further exploration and evaluation of such systems.

In 2023, the AI researcher Geoffrey Hinton stated that:

The idea that this stuff could actually get smarter than people – a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.

In May 2023, Demis Hassabis similarly said that "The progress in the last few years has been pretty incredible", and that he sees no reason why it would slow down, expecting AGI within a decade or even a few years. In March 2024, Nvidia's CEO, Jensen Huang, stated his expectation that within five years, AI would be capable of passing any test at least as well as humans. In June 2024, the AI researcher Leopold Aschenbrenner, a former OpenAI employee, estimated AGI by 2027 to be "strikingly plausible".

Whole brain emulation

Main articles: Whole brain emulation and Brain simulation

While the development of transformer models like those underlying ChatGPT is considered the most promising path to AGI, whole brain emulation can serve as an alternative approach. With whole brain simulation, a brain model is built by scanning and mapping a biological brain in detail, and then copying and simulating it on a computer system or another computational device. The simulation model must be sufficiently faithful to the original, so that it behaves in practically the same way as the original brain. Whole brain emulation is a type of brain simulation that is discussed in computational neuroscience and neuroinformatics, and for medical research purposes. It has been discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the computing power required to emulate it.

Early estimates

Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500 mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at the level of neural simulation, while the Sandberg and Bostrom report is less certain about where consciousness arises.

For low-level brain simulation, a very powerful cluster of computers or GPUs would be required, given the enormous quantity of synapses within the human brain. Each of the 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. The brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14 to 5×10^14 synapses (100 to 500 trillion). An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10^14 (100 trillion) synaptic updates per second (SUPS).
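The order-of-magnitude arithmetic behind these figures is easy to reproduce. A sketch using the rounded counts quoted above (note that the naive product of neuron count and average synapses per neuron lands slightly above the quoted adult range, reflecting the spread between sources):

```python
neurons = 1e11               # ~100 billion neurons in the adult human brain
synapses_per_neuron = 7_000  # average synaptic connections per neuron

total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.0e}")  # 7e+14 -- just above the 1e14 to 5e14 adult estimates

# Simple switch model: roughly one state update per synapse per second,
# applied to the low-end adult synapse count, yields the ~1e14 SUPS figure.
updates_per_synapse_per_sec = 1
sups = 1e14 * updates_per_synapse_per_sec
print(f"{sups:.0e}")  # 1e+14
```

The point of the exercise is only orders of magnitude; the per-synapse update rate of 1/s is the simplifying assumption of the switch model, not a measured quantity.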

In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10^16 computations per second (cps). (For comparison, if a "computation" was equivalent to one "floating-point operation" – a measure used to rate current supercomputers – then 10^16 "computations" would be equivalent to 10 petaFLOPS, achieved in 2011, while 10^18 was achieved in 2022.) He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.
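This kind of extrapolation can be sketched as a simple doubling calculation. Assuming, as in the trendline above, that capacity doubles every 1.1 years, and taking roughly 1e12 FLOPS as an assumed baseline for the fastest supercomputer circa 1997, the 10^16 figure arrives right around the 2011 petascale milestone:

```python
import math

def year_reached(start_year, start_flops, target_flops, doubling_years=1.1):
    """Project the year a target capacity is reached under steady
    exponential growth with the given doubling time."""
    doublings = math.log2(target_flops / start_flops)
    return start_year + doublings * doubling_years

# ~1e12 FLOPS in 1997 (assumed baseline) -> 1e16 FLOPS in ...
print(round(year_reached(1997, 1e12, 1e16)))  # 2012, close to the 2011 10-petaFLOPS milestone
```

The 1997 baseline is an illustrative assumption, not a figure from the text; the exercise only shows how sensitive such predictions are to the assumed doubling time and starting point.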

Current research

The Human Brain Project, an EU-funded initiative active from 2013 to 2023, developed a particularly detailed and publicly accessible atlas of the human brain. In 2023, researchers from Duke University performed a high-resolution scan of a mouse brain.

Criticisms of simulation-based approaches

The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in broad outline. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition, the estimates do not account for glial cells, which are known to play a role in cognitive processes.

A fundamental criticism of the simulated brain approach derives from embodied cognition theory which asserts that human embodiment is an essential aspect of human intelligence and is necessary to ground meaning. If this theory is correct, any fully functional brain model will need to encompass more than just the neurons (e.g., a robotic body). Goertzel proposes virtual embodiment (like in metaverses like Second Life) as an option, but it is unknown whether this would be sufficient.

Philosophical perspective

See also: Philosophy of artificial intelligence and Turing test

"Strong AI" as defined in philosophy

In 1980, philosopher John Searle coined the term "strong AI" as part of his Chinese room argument. He proposed a distinction between two hypotheses about artificial intelligence:

  • Strong AI hypothesis: An artificial intelligence system can have "a mind" and "consciousness".
  • Weak AI hypothesis: An artificial intelligence system can (only) act like it thinks and has a mind and consciousness.

The first one he called "strong" because it makes a stronger statement: it assumes something special has happened to the machine that goes beyond those abilities that we can test. The behaviour of a "weak AI" machine would be precisely identical to a "strong AI" machine, but the latter would also have subjective conscious experience. This usage is also common in academic AI research and textbooks.

In contrast to Searle and mainstream AI, some futurists such as Ray Kurzweil use the term "strong AI" to mean "human level artificial general intelligence". This is not the same as Searle's strong AI, unless it is assumed that consciousness is necessary for human-level AGI. Academic philosophers such as Searle do not believe that is the case, and to most artificial intelligence researchers the question is out-of-scope.

Mainstream AI is most interested in how a program behaves. According to Russell and Norvig, "as long as the program works, they don't care if you call it real or a simulation." If the program can behave as if it has a mind, then there is no need to know whether it actually has one – indeed, there would be no way to tell. For AI research, Searle's "weak AI hypothesis" is equivalent to the statement "artificial general intelligence is possible". Thus, according to Russell and Norvig, "most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis." Thus, for academic AI research, "Strong AI" and "AGI" are two different things.

Consciousness

Main article: Artificial consciousness

Consciousness can have various meanings, and some aspects play significant roles in science fiction and the ethics of artificial intelligence:

  • Sentience (or "phenomenal consciousness"): The ability to "feel" perceptions or emotions subjectively, as opposed to the ability to reason about perceptions. Some philosophers, such as David Chalmers, use the term "consciousness" to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. Determining why and how subjective experience arises is known as the hard problem of consciousness. Thomas Nagel explained in 1974 that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e., has consciousness) but a toaster does not. In 2022, a Google engineer claimed that the company's AI chatbot, LaMDA, had achieved sentience, though this claim was widely disputed by other experts.
  • Self-awareness: To have conscious awareness of oneself as a separate individual, especially to be consciously aware of one's own thoughts. This is opposed to simply being the "subject of one's thought"—an operating system or debugger is able to be "aware of itself" (that is, to represent itself in the same way it represents everything else)—but this is not what people typically mean when they use the term "self-awareness".

These traits have a moral dimension. AI sentience would give rise to concerns of welfare and legal protection, similarly to animals. Other aspects of consciousness related to cognitive capabilities are also relevant to the concept of AI rights. Figuring out how to integrate advanced AI with existing legal and social frameworks is an emergent issue.

Benefits

AGI could have a wide variety of applications. If oriented towards such goals, AGI could help mitigate various problems in the world such as hunger, poverty and health problems.

AGI could improve productivity and efficiency in most jobs. For example, in public health, AGI could accelerate medical research, notably against cancer. It could take care of the elderly, and democratize access to rapid, high-quality medical diagnostics. It could offer fun, cheap and personalized education. The need to work to subsist could become obsolete if the wealth produced is properly redistributed. This also raises the question of the place of humans in a radically automated society.

AGI could also help to make rational decisions, and to anticipate and prevent disasters. It could also help to reap the benefits of potentially catastrophic technologies such as nanotechnology or climate engineering, while avoiding the associated risks. If an AGI's primary goal is to prevent existential catastrophes such as human extinction (which could be difficult if the Vulnerable World Hypothesis turns out to be true), it could take measures to drastically reduce the risks while minimizing the impact of these measures on our quality of life.

Risks

Existential risks

Main articles: Existential risk from artificial general intelligence and AI safety

AGI may represent multiple types of existential risk, which are risks that threaten "the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development". The risk of human extinction from AGI has been the topic of many debates, but there is also the possibility that the development of AGI would lead to a permanently flawed future. Notably, it could be used to spread and preserve the set of values of whoever develops it. If humanity still has moral blind spots similar to slavery in the past, AGI might irreversibly entrench them, preventing moral progress. Furthermore, AGI could facilitate mass surveillance and indoctrination, which could be used to create a stable repressive worldwide totalitarian regime. There is also a risk for the machines themselves. If machines that are sentient or otherwise worthy of moral consideration are mass-created in the future, engaging in a civilizational path that indefinitely neglects their welfare and interests could be an existential catastrophe. Considering how much AGI could improve humanity's future and help reduce other existential risks, Toby Ord calls these existential risks "an argument for proceeding with due caution", not for "abandoning AI".

Risk of loss of control and human extinction

The thesis that AI poses an existential risk for humans, and that this risk needs more attention, is controversial but has been endorsed in 2023 by many public figures, AI researchers and CEOs of AI companies such as Elon Musk, Bill Gates, Geoffrey Hinton, Yoshua Bengio, Demis Hassabis and Sam Altman.

In 2014, Stephen Hawking criticized widespread indifference:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here—we'll leave the lights on?' Probably not—but this is more or less what is happening with AI.

The potential fate of humanity has sometimes been compared to the fate of gorillas threatened by human activities. The comparison states that greater intelligence allowed humanity to dominate gorillas, which are now vulnerable in ways that they could not have anticipated. As a result, the gorilla has become an endangered species, not out of malice, but simply as collateral damage from human activities.

The skeptic Yann LeCun considers that AGIs will have no desire to dominate humanity and that we should be careful not to anthropomorphize them and interpret their intents as we would for humans. He said that people won't be "smart enough to design super-intelligent machines, yet ridiculously stupid to the point of giving it moronic objectives with no safeguards". On the other hand, the concept of instrumental convergence suggests that, almost whatever their goals, intelligent agents will have reasons to try to survive and acquire more power as intermediary steps to achieving those goals, and that this does not require having emotions.

Many scholars who are concerned about existential risk advocate for more research into solving the "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximise the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence? Solving the control problem is complicated by the AI arms race (which could lead to a race to the bottom of safety precautions in order to release products before competitors), and the use of AI in weapon systems.

The thesis that AI can pose existential risk also has detractors. Skeptics usually say that AGI is unlikely in the short-term, or that concerns about AGI distract from other issues related to current AI. Former Google fraud czar Shuman Ghosemajumder considers that for many people outside of the technology industry, existing chatbots and LLMs are already perceived as though they were AGI, leading to further misunderstanding and fear.

Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God. Some researchers believe that the communication campaigns on AI existential risk by certain AI groups (such as OpenAI, Anthropic, DeepMind, and Conjecture) may be an attempt at regulatory capture and to inflate interest in their products.

In 2023, the CEOs of Google DeepMind, OpenAI and Anthropic, along with other industry leaders and researchers, issued a joint statement asserting that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Mass unemployment

Further information: Technological unemployment

Researchers from OpenAI estimated that "80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while around 19% of workers may see at least 50% of their tasks impacted". They consider office workers to be the most exposed, for example mathematicians, accountants or web designers. Compared with current systems, AGI could have greater autonomy and a greater ability to make decisions, interface with other computer tools, and control robotized bodies.

According to Stephen Hawking, the outcome of automation on the quality of life will depend on how the wealth will be redistributed:

Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

Elon Musk considers that the automation of society will require governments to adopt a universal basic income.

See also

Notes

  1. ^ See below for the origin of the term "strong AI", and see the academic definition of "strong AI" and weak AI in the article Chinese room.
  2. AI founder John McCarthy writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." (For a discussion of some definitions of intelligence used by artificial intelligence researchers, see philosophy of artificial intelligence.)
  3. The Lighthill report specifically criticized AI's "grandiose objectives" and led to the dismantling of AI research in England. In the U.S., DARPA became determined to fund only "mission-oriented direct research, rather than basic undirected research".
  4. As AI founder John McCarthy writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case."
  5. In "Mind Children" 10^15 cps is used. More recently, in 1997, Moravec argued for 10^8 MIPS, which would roughly correspond to 10^14 cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.
  6. As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis."
  7. Alan Turing made this point in 1950.

References

  1. Krishna, Sri (9 February 2023). "What is artificial narrow intelligence (ANI)?". VentureBeat. Retrieved 1 March 2024. ANI is designed to perform a single task.
  2. "OpenAI Charter". OpenAI. Retrieved 6 April 2023. Our mission is to ensure that artificial general intelligence benefits all of humanity.
  3. Heath, Alex (18 January 2024). "Mark Zuckerberg's new goal is creating artificial general intelligence". The Verge. Retrieved 13 June 2024. Our vision is to build AI that is better than human-level at all of the human senses.
  4. Baum, Seth D. (2020). A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (PDF) (Report). Global Catastrophic Risk Institute. Retrieved 28 November 2024. 72 AGI R&D projects were identified as being active in 2020.
  5. "AI timelines: What do experts in artificial intelligence expect for the future?". Our World in Data. Retrieved 6 April 2023.
  6. Metz, Cade (15 May 2023). "Some Researchers Say A.I. Is Already Here, Stirring Debate in Tech Circles". The New York Times. Retrieved 18 May 2023.
  7. "AI pioneer Geoffrey Hinton quits Google and warns of danger ahead". The New York Times. 1 May 2023. Retrieved 2 May 2023. It is hard to see how you can prevent the bad actors from using it for bad things.
  8. Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric (2023). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv preprint. arXiv:2303.12712. GPT-4 shows sparks of AGI.
  9. Butler, Octavia E. (1993). Parable of the Sower. Grand Central Publishing. ISBN 978-0-4466-7550-5. All that you touch you change. All that you change changes you.
  10. Vinge, Vernor (1992). A Fire Upon the Deep. Tor Books. ISBN 978-0-8125-1528-2. The Singularity is coming.
  11. Morozov, Evgeny (30 June 2023). "The True Threat of Artificial Intelligence". The New York Times. The real threat is not AI itself but the way we deploy it.
  12. "Impressed by artificial intelligence? Experts say AGI is coming next, and it has 'existential' risks". ABC News. 23 March 2023. Retrieved 6 April 2023. AGI could pose existential risks to humanity.
  13. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. ISBN 978-0-1996-7811-2. The first superintelligence will be the last invention that humanity needs to make.
  14. Roose, Kevin (30 May 2023). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn". The New York Times. Mitigating the risk of extinction from AI should be a global priority.
  15. "Statement on AI Risk". Center for AI Safety. Retrieved 1 March 2024. AI experts warn of risk of extinction from AI.
  16. Mitchell, Melanie (30 May 2023). "Are AI's Doomsday Scenarios Worth Taking Seriously?". The New York Times. We are far from creating machines that can outthink us in general ways.
  17. LeCun, Yann (June 2023). "AGI does not present an existential risk". Medium. There is no reason to fear AI as an existential threat.
  18. Kurzweil 2005, p. 260.
  19. Kurzweil, Ray (5 August 2005), "Long Live AI", Forbes, archived from the original on 14 August 2005: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence."
  20. "The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013". Archived from the original on 26 February 2014. Retrieved 22 February 2014.
  21. Newell & Simon 1976. This is the term they use for "human-level" intelligence in the physical symbol system hypothesis.
  22. "The Open University on Strong and Weak AI". Archived from the original on 25 September 2009. Retrieved 8 October 2007.
  23. "What is artificial superintelligence (ASI)? | Definition from TechTarget". Enterprise AI. Retrieved 8 October 2023.
  24. "Artificial intelligence is transforming our world – it is on all of us to make sure that it goes well". Our World in Data. Retrieved 8 October 2023.
  25. Dickson, Ben (16 November 2023). "Here is how far we are to achieving AGI, according to DeepMind". VentureBeat.
  26. McCarthy, John (2007a). "Basic Questions". Stanford University. Archived from the original on 26 October 2007. Retrieved 6 December 2007.
  27. This list of intelligent traits is based on the topics covered by major AI textbooks, including: Russell & Norvig 2003, Luger & Stubblefield 2004, Poole, Mackworth & Goebel 1998 and Nilsson 1998.
  28. Johnson 1987
  29. de Charms, R. (1968). Personal causation. New York: Academic Press.
  30. Pfeifer, R. and Bongard, J. C., How the body shapes the way we think: a new view of intelligence (The MIT Press, 2007). ISBN 0-2621-6239-3
  31. White, R. W. (1959). "Motivation reconsidered: The concept of competence". Psychological Review. 66 (5): 297–333. doi:10.1037/h0040934. PMID 13844397. S2CID 37385966.
  32. White, R. W. (1959). "Motivation reconsidered: The concept of competence". Psychological Review. 66 (5): 297–333. doi:10.1037/h0040934. PMID 13844397. S2CID 37385966.
  33. Muehlhauser, Luke (11 August 2013). "What is AGI?". Machine Intelligence Research Institute. Archived from the original on 25 April 2014. Retrieved 1 May 2014.
  34. "What is Artificial General Intelligence (AGI)? | 4 Tests For Ensuring Artificial General Intelligence". Talky Blog. 13 July 2019. Archived from the original on 17 July 2019. Retrieved 17 July 2019.
  35. Kirk-Giannini, Cameron Domenico; Goldstein, Simon (16 October 2023). "AI is closer than ever to passing the Turing test for 'intelligence'. What happens when it does?". The Conversation. Retrieved 22 September 2024.
  36. Turing 1950.
  37. Turing, Alan (1952). B. Jack Copeland (ed.). Can Automatic Calculating Machines Be Said To Think?. Oxford: Oxford University Press. pp. 487–506. ISBN 978-0-1982-5079-1.
  38. "Eugene Goostman is a real boy – the Turing Test says so". The Guardian. 9 June 2014. ISSN 0261-3077. Retrieved 3 March 2024.
  39. "Scientists dispute whether computer 'Eugene Goostman' passed Turing test". BBC News. 9 June 2014. Retrieved 3 March 2024.
  40. Jones, Cameron R.; Bergen, Benjamin K. (9 May 2024). "People cannot distinguish GPT-4 from a human in a Turing test". arXiv:2405.08007.
  41. Varanasi, Lakshmi (21 March 2023). "AI models like ChatGPT and GPT-4 are acing everything from the bar exam to AP Biology. Here's a list of difficult exams both AI versions have passed". Business Insider. Retrieved 30 May 2023.
  42. Naysmith, Caleb (7 February 2023). "6 Jobs Artificial Intelligence Is Already Replacing and How Investors Can Capitalize on It". Retrieved 30 May 2023.
  43. Turk, Victoria (28 January 2015). "The Plan to Replace the Turing Test with a 'Turing Olympics'". Vice. Retrieved 3 March 2024.
  44. Gopani, Avi (25 May 2022). "Turing Test is unreliable. The Winograd Schema is obsolete. Coffee is the answer". Analytics India Magazine. Retrieved 3 March 2024.
  45. Bhaimiya, Sawdah (20 June 2023). "DeepMind's co-founder suggested testing an AI chatbot's ability to turn $100,000 into $1 million to measure human-like intelligence". Business Insider. Retrieved 3 March 2024.
  46. Suleyman, Mustafa (14 July 2023). "Mustafa Suleyman: My new Turing test would see if AI can make $1 million". MIT Technology Review. Retrieved 3 March 2024.
  47. Shapiro, Stuart C. (1992). "Artificial Intelligence" (PDF). In Stuart C. Shapiro (ed.). Encyclopedia of Artificial Intelligence (Second ed.). New York: John Wiley. pp. 54–57. Archived (PDF) from the original on 1 February 2016. (Section 4 is on "AI-Complete Tasks".)
  48. Yampolskiy, Roman V. (2012). Xin-She Yang (ed.). "Turing Test as a Defining Feature of AI-Completeness" (PDF). Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM): 3–17. Archived (PDF) from the original on 22 May 2013.
  49. "AI Index: State of AI in 13 Charts". Stanford University Human-Centered Artificial Intelligence. 15 April 2024. Retrieved 27 May 2024.
  50. Crevier 1993, pp. 48–50
  51. Kaplan, Andreas (2022). "Artificial Intelligence, Business and Civilization – Our Fate Made in Machines". Archived from the original on 6 May 2022. Retrieved 12 March 2022.
  52. Simon 1965, p. 96 quoted in Crevier 1993, p. 109
  53. "Scientist on the Set: An Interview with Marvin Minsky". Archived from the original on 16 July 2012. Retrieved 5 April 2008.
  54. Marvin Minsky to Darrach (1970), quoted in Crevier (1993, p. 109).
  55. Lighthill 1973; Howe 1994
  56. NRC 1999, "Shift to Applied Research Increases Investment".
  57. Crevier 1993, pp. 115–117; Russell & Norvig 2003, pp. 21–22.
  58. Crevier 1993, p. 211, Russell & Norvig 2003, p. 24 and see also Feigenbaum & McCorduck 1983
  59. Crevier 1993, pp. 161–162, 197–203, 240; Russell & Norvig 2003, p. 25.
  60. Crevier 1993, pp. 209–212
  61. McCarthy, John (2000). "Reply to Lighthill". Stanford University. Archived from the original on 30 September 2008. Retrieved 29 September 2007.
  62. Markoff, John (14 October 2005). "Behind Artificial Intelligence, a Squadron of Bright Real People". The New York Times. Archived from the original on 2 February 2023. Retrieved 18 February 2017. At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.
  63. Russell & Norvig 2003, pp. 25–26
  64. "Trends in the Emerging Tech Hype Cycle". Gartner Reports. Archived from the original on 22 May 2019. Retrieved 7 May 2019.
  65. Moravec 1988, p. 20
  66. Harnad, S. (1990). "The Symbol Grounding Problem". Physica D. 42 (1–3): 335–346. arXiv:cs/9906002. Bibcode:1990PhyD...42..335H. doi:10.1016/0167-2789(90)90087-6. S2CID 3204300.
  67. Gubrud 1997
  68. Hutter, Marcus (2005). Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Texts in Theoretical Computer Science an EATCS Series. Springer. doi:10.1007/b138233. ISBN 978-3-5402-6877-2. S2CID 33352850. Archived from the original on 19 July 2022. Retrieved 19 July 2022.
  69. Legg, Shane (2008). Machine Super Intelligence (PDF) (Thesis). University of Lugano. Archived (PDF) from the original on 15 June 2022. Retrieved 19 July 2022.
  70. Goertzel, Ben (2014). Artificial General Intelligence. Lecture Notes in Computer Science. Vol. 8598. Journal of Artificial General Intelligence. doi:10.1007/978-3-319-09274-4. ISBN 978-3-3190-9273-7. S2CID 8387410.
  71. "Who coined the term "AGI"?". goertzel.org. Archived from the original on 28 December 2018. Retrieved 28 December 2018., via Life 3.0: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'
  72. Wang & Goertzel 2007
  73. "First International Summer School in Artificial General Intelligence, Main summer school: June 22 – July 3, 2009, OpenCog Lab: July 6-9, 2009". Archived from the original on 28 September 2020. Retrieved 11 May 2020.
  74. "Избираеми дисциплини 2009/2010 – пролетен триместър" [Elective courses 2009/2010 – spring trimester]. Факултет по математика и информатика [Faculty of Mathematics and Informatics] (in Bulgarian). Archived from the original on 26 July 2020. Retrieved 11 May 2020.
  75. "Избираеми дисциплини 2010/2011 – зимен триместър" [Elective courses 2010/2011 – winter trimester]. Факултет по математика и информатика [Faculty of Mathematics and Informatics] (in Bulgarian). Archived from the original on 26 July 2020. Retrieved 11 May 2020.
  76. Shevlin, Henry; Vold, Karina; Crosby, Matthew; Halina, Marta (4 October 2019). "The limits of machine intelligence: Despite progress in machine intelligence, artificial general intelligence is still a major challenge". EMBO Reports. 20 (10): e49177. doi:10.15252/embr.201949177. ISSN 1469-221X. PMC 6776890. PMID 31531926.
  77. Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric; Kamar, Ece; Lee, Peter; Lee, Yin Tat; Li, Yuanzhi; Lundberg, Scott; Nori, Harsha; Palangi, Hamid; Ribeiro, Marco Tulio; Zhang, Yi (27 March 2023). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv:2303.12712.
  78. "Microsoft Researchers Claim GPT-4 Is Showing "Sparks" of AGI". Futurism. 23 March 2023. Retrieved 13 December 2023.
  79. Allen, Paul; Greaves, Mark (12 October 2011). "The Singularity Isn't Near". MIT Technology Review. Retrieved 17 September 2014.
  80. Winfield, Alan. "Artificial intelligence will not turn into a Frankenstein's monster". The Guardian. Archived from the original on 17 September 2014. Retrieved 17 September 2014.
  81. Deane, George (2022). "Machines That Feel and Think: The Role of Affective Feelings and Mental Action in (Artificial) General Intelligence". Artificial Life. 28 (3): 289–309. doi:10.1162/artl_a_00368. ISSN 1064-5462. PMID 35881678. S2CID 251069071.
  82. Clocksin 2003.
  83. Fjelland, Ragnar (17 June 2020). "Why general artificial intelligence will not be realized". Humanities and Social Sciences Communications. 7 (1): 1–9. doi:10.1057/s41599-020-0494-4. hdl:11250/2726984. ISSN 2662-9992. S2CID 219710554.
  84. McCarthy 2007b.
  85. Khatchadourian, Raffi (23 November 2015). "The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?". The New Yorker. Archived from the original on 28 January 2016. Retrieved 7 February 2016.
  86. Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.
  87. Armstrong, Stuart, and Kaj Sotala. 2012. “How We’re Predicting AI—or Failing To.” In Beyond AI: Artificial Dreams, edited by Jan Romportl, Pavel Ircing, Eva Žáčková, Michal Polák and Radek Schuster, 52–75. Plzeň: University of West Bohemia
  88. "Microsoft Now Claims GPT-4 Shows 'Sparks' of General Intelligence". 24 March 2023.
  89. Shimek, Cary (6 July 2023). "AI Outperforms Humans in Creativity Test". Neuroscience News. Retrieved 20 October 2023.
  90. Guzik, Erik E.; Byrge, Christian; Gilde, Christian (1 December 2023). "The originality of machines: AI takes the Torrance Test". Journal of Creativity. 33 (3): 100065. doi:10.1016/j.yjoc.2023.100065. ISSN 2713-3745. S2CID 261087185.
  91. Arcas, Blaise Agüera y (10 October 2023). "Artificial General Intelligence Is Already Here". Noema.
  92. Zia, Tehseen (8 January 2024). "Unveiling of Large Multimodal Models: Shaping the Landscape of Language Models in 2024". Unite.ai. Retrieved 26 May 2024.
  93. "Introducing OpenAI o1-preview". OpenAI. 12 September 2024.
  94. Knight, Will. "OpenAI Announces a New AI Model, Code-Named Strawberry, That Solves Difficult Problems Step by Step". Wired. ISSN 1059-1028. Retrieved 17 September 2024.
  95. "OpenAI Employee Claims AGI Has Been Achieved". Orbital Today. 13 December 2024. Retrieved 27 December 2024.
  96. "AI Index: State of AI in 13 Charts". hai.stanford.edu. 15 April 2024. Retrieved 7 June 2024.
  97. "Next-Gen AI: OpenAI and Meta's Leap Towards Reasoning Machines". Unite.ai. 19 April 2024. Retrieved 7 June 2024.
  98. James, Alex P. (2022). "The Why, What, and How of Artificial General Intelligence Chip Development". IEEE Transactions on Cognitive and Developmental Systems. 14 (2): 333–347. arXiv:2012.06338. doi:10.1109/TCDS.2021.3069871. ISSN 2379-8920. S2CID 228376556. Archived from the original on 28 August 2022. Retrieved 28 August 2022.
  99. Pei, Jing; Deng, Lei; Song, Sen; Zhao, Mingguo; Zhang, Youhui; Wu, Shuang; Wang, Guanrui; Zou, Zhe; Wu, Zhenzhi; He, Wei; Chen, Feng; Deng, Ning; Wu, Si; Wang, Yu; Wu, Yujie (2019). "Towards artificial general intelligence with hybrid Tianjic chip architecture". Nature. 572 (7767): 106–111. Bibcode:2019Natur.572..106P. doi:10.1038/s41586-019-1424-8. ISSN 1476-4687. PMID 31367028. S2CID 199056116. Archived from the original on 29 August 2022. Retrieved 29 August 2022.
  100. Pandey, Mohit; Fernandez, Michael; Gentile, Francesco; Isayev, Olexandr; Tropsha, Alexander; Stern, Abraham C.; Cherkasov, Artem (March 2022). "The transformational role of GPU computing and deep learning in drug discovery". Nature Machine Intelligence. 4 (3): 211–221. doi:10.1038/s42256-022-00463-x. ISSN 2522-5839. S2CID 252081559.
  101. Goertzel & Pennachin 2006.
  102. Kurzweil 2005, p. 260.
  103. Goertzel 2007.
  104. Grace, Katja (2016). "Error in Armstrong and Sotala 2012". AI Impacts (blog). Archived from the original on 4 December 2020. Retrieved 24 August 2020.
  105. Butz, Martin V. (1 March 2021). "Towards Strong AI". KI – Künstliche Intelligenz. 35 (1): 91–101. doi:10.1007/s13218-021-00705-x. ISSN 1610-1987. S2CID 256065190.
  106. Liu, Feng; Shi, Yong; Liu, Ying (2017). "Intelligence Quotient and Intelligence Grade of Artificial Intelligence". Annals of Data Science. 4 (2): 179–191. arXiv:1709.10242. doi:10.1007/s40745-017-0109-0. S2CID 37900130.
  107. Brien, Jörn (5 October 2017). "Google-KI doppelt so schlau wie Siri" [Google AI is twice as smart as Siri – but a six-year-old beats both] (in German). Archived from the original on 3 January 2019. Retrieved 2 January 2019.
  108. Grossman, Gary (3 September 2020). "We're entering the AI twilight zone between narrow and general AI". VentureBeat. Archived from the original on 4 September 2020. Retrieved 5 September 2020. Certainly, too, there are those who claim we are already seeing an early example of an AGI system in the recently announced GPT-3 natural language processing (NLP) neural network. ... So is GPT-3 the first example of an AGI system? This is debatable, but the consensus is that it is not AGI. ... If nothing else, GPT-3 tells us there is a middle ground between narrow and general AI.
  109. Quach, Katyanna. "A developer built an AI chatbot using GPT-3 that helped a man speak again to his late fiancée. OpenAI shut it down". The Register. Archived from the original on 16 October 2021. Retrieved 16 October 2021.
  110. Wiggers, Kyle (13 May 2022), "DeepMind's new AI can perform over 600 tasks, from playing games to controlling robots", TechCrunch, archived from the original on 16 June 2022, retrieved 12 June 2022
  111. Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric; Kamar, Ece; Lee, Peter; Lee, Yin Tat; Li, Yuanzhi; Lundberg, Scott; Nori, Harsha; Palangi, Hamid; Ribeiro, Marco Tulio; Zhang, Yi (22 March 2023). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv:2303.12712.
  112. Metz, Cade (1 May 2023). "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead". The New York Times. ISSN 0362-4331. Retrieved 7 June 2023.
  113. Bove, Tristan. "A.I. could rival human intelligence in 'just a few years,' says CEO of Google's main A.I. research lab". Fortune. Retrieved 4 September 2024.
  114. Nellis, Stephen (2 March 2024). "Nvidia CEO says AI could pass human tests in five years". Reuters.
  115. Aschenbrenner, Leopold. "SITUATIONAL AWARENESS, The Decade Ahead".
  116. Sullivan, Mark (18 October 2023). "Why everyone seems to disagree on how to define Artificial General Intelligence". Fast Company.
  117. Nosta, John (5 January 2024). "The Accelerating Path to Artificial General Intelligence". Psychology Today. Retrieved 30 March 2024.
  118. Hickey, Alex. "Whole Brain Emulation: A Giant Step for Neuroscience". Tech Brew. Retrieved 8 November 2023.
  119. Sandberg & Boström 2008.
  120. Drachman 2005.
  121. Russell & Norvig 2003.
  122. Moravec 1988, p. 61.
  123. Moravec 1998.
  124. Holmgaard Mersh, Amalie (15 September 2023). "Decade-long European research project maps the human brain". euractiv.
  125. Swaminathan, Nikhil (January–February 2011). "Glia—the other brain cells". Discover. Archived from the original on 8 February 2014. Retrieved 24 January 2014.
  126. de Vega, Glenberg & Graesser 2008. A wide range of views in current research, all of which require grounding to some degree
  127. Thornton, Angela (26 June 2023). "How uploading our minds to a computer might become possible". The Conversation. Retrieved 8 November 2023.
  128. Searle 1980
  129. For example:
  130. Russell & Norvig 2003, p. 947.
  131. Though see Explainable artificial intelligence for the field's interest in why a program behaves the way it does.
  132. Chalmers, David J. (9 August 2023). "Could a Large Language Model Be Conscious?". Boston Review.
  133. Seth, Anil. "Consciousness". New Scientist. Retrieved 5 September 2024.
  134. Nagel 1974.
  135. "The Google engineer who thinks the company's AI has come to life". The Washington Post. 11 June 2022. Retrieved 12 June 2023.
  136. Kateman, Brian (24 July 2023). "AI Should Be Terrified of Humans". TIME. Retrieved 5 September 2024.
  137. Nosta, John (18 December 2023). "Should Artificial Intelligence Have Rights?". Psychology Today. Retrieved 5 September 2024.
  138. Akst, Daniel (10 April 2023). "Should Robots With Artificial Intelligence Have Moral or Legal Rights?". The Wall Street Journal.
  139. "Artificial General Intelligence – Do[es] the cost outweigh benefits?". 23 August 2021. Retrieved 7 June 2023.
  140. "How we can Benefit from Advancing Artificial General Intelligence (AGI) – Unite.AI". www.unite.ai. 7 April 2020. Retrieved 7 June 2023.
  141. Talty, Jules; Julien, Stephan. "What Will Our Society Look Like When Artificial Intelligence Is Everywhere?". Smithsonian Magazine. Retrieved 7 June 2023.
  142. Stevenson, Matt (8 October 2015). "Answers to Stephen Hawking's AMA are Here!". Wired. ISSN 1059-1028. Retrieved 8 June 2023.
  143. Bostrom, Nick (2017). "§ Preferred order of arrival". Superintelligence: paths, dangers, strategies (Reprinted with corrections 2017 ed.). Oxford, United Kingdom; New York, New York, USA: Oxford University Press. ISBN 978-0-1996-7811-2.
  144. Piper, Kelsey (19 November 2018). "How technological progress is making it likelier than ever that humans will destroy ourselves". Vox. Retrieved 8 June 2023.
  145. Doherty, Ben (17 May 2018). "Climate change an 'existential security risk' to Australia, Senate inquiry says". The Guardian. ISSN 0261-3077. Retrieved 16 July 2023.
  146. MacAskill, William (2022). What we owe the future. New York, NY: Basic Books. ISBN 978-1-5416-1862-6.
  147. Ord, Toby (2020). "Chapter 5: Future Risks, Unaligned Artificial Intelligence". The Precipice: Existential Risk and the Future of Humanity. Bloomsbury Publishing. ISBN 978-1-5266-0021-9.
  148. Al-Sibai, Noor (13 February 2022). "OpenAI Chief Scientist Says Advanced AI May Already Be Conscious". Futurism. Retrieved 24 December 2023.
  149. Samuelsson, Paul Conrad (2019). "Artificial Consciousness: Our Greatest Ethical Challenge". Philosophy Now. Retrieved 23 December 2023.
  150. Kateman, Brian (24 July 2023). "AI Should Be Terrified of Humans". TIME. Retrieved 23 December 2023.
  151. Roose, Kevin (30 May 2023). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn". The New York Times. ISSN 0362-4331. Retrieved 24 December 2023.
  152. "Statement on AI Risk". Center for AI Safety. 30 May 2023. Retrieved 8 June 2023.
  153. "Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?'". The Independent (UK). Archived from the original on 25 September 2015. Retrieved 3 December 2014.
  154. Herger, Mario. "The Gorilla Problem – Enterprise Garage". Retrieved 7 June 2023.
  155. "The fascinating Facebook debate between Yann LeCun, Stuart Russel and Yoshua Bengio about the risks of strong AI" (in French). Retrieved 8 June 2023.
  156. "Will Artificial Intelligence Doom The Human Race Within The Next 100 Years?". HuffPost. 22 August 2014. Retrieved 8 June 2023.
  157. Sotala, Kaj; Yampolskiy, Roman V. (19 December 2014). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1): 018001. doi:10.1088/0031-8949/90/1/018001. ISSN 0031-8949.
  158. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies (First ed.). Oxford University Press. ISBN 978-0-1996-7811-2.
  159. Chow, Andrew R.; Perrigo, Billy (16 February 2023). "The AI Arms Race Is On. Start Worrying". TIME. Retrieved 24 December 2023.
  160. Tetlow, Gemma (12 January 2017). "AI arms race risks spiralling out of control, report warns". Financial Times. Archived from the original on 11 April 2022. Retrieved 24 December 2023.
  161. Milmo, Dan; Stacey, Kiran (25 September 2023). "Experts disagree over threat posed but artificial intelligence cannot be ignored". The Guardian. ISSN 0261-3077. Retrieved 24 December 2023.
  162. "Humanity, Security & AI, Oh My! (with Ian Bremmer & Shuman Ghosemajumder)". CAFE. 20 July 2023. Retrieved 15 September 2023.
  163. Hamblin, James (9 May 2014). "But What Would the End of Humanity Mean for Me?". The Atlantic. Archived from the original on 4 June 2014. Retrieved 12 December 2015.
  164. Titcomb, James (30 October 2023). "Big Tech is stoking fears over AI, warn scientists". The Telegraph. Retrieved 7 December 2023.
  165. Davidson, John (30 October 2023). "Google Brain founder says big tech is lying about AI extinction danger". Australian Financial Review. Archived from the original on 7 December 2023. Retrieved 7 December 2023.
  166. Eloundou, Tyna; Manning, Sam; Mishkin, Pamela; Rock, Daniel (17 March 2023). "GPTs are GPTs: An early look at the labor market impact potential of large language models". OpenAI. Retrieved 7 June 2023.
  167. Hurst, Luke (23 March 2023). "OpenAI says 80% of workers could see their jobs impacted by AI. These are the jobs most affected". euronews. Retrieved 8 June 2023.
  168. Sheffey, Ayelet (20 August 2021). "Elon Musk says we need universal basic income because 'in the future, physical work will be a choice'". Business Insider. Archived from the original on 9 July 2023. Retrieved 8 June 2023.
