Artificial general intelligence (AGI) is a type of highly autonomous artificial intelligence (AI) that matches or surpasses human cognitive capabilities across most or all economically valuable work or cognitive labor. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
Creating AGI is a primary goal of AI research and of companies such as OpenAI, Google, and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries.
The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2023, some argue that it may be possible in years or decades; others maintain it might take a century or longer; a minority believe it may never be achieved; and another minority claims that it is already here. Notable AI researcher Geoffrey Hinton has expressed concerns about the rapid progress towards AGI, suggesting it could be achieved sooner than many expect.
There is debate on the exact definition of AGI and regarding whether modern large language models (LLMs) such as GPT-4 are early forms of AGI. AGI is a common topic in science fiction and futures studies.
Contention exists over whether AGI represents an existential risk. Many experts on AI have stated that mitigating the risk of human extinction posed by AGI should be a global priority. Others find the development of AGI to be too remote to present such a risk.