Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
One argument for the importance of this risk references how human beings dominate other species because the human brain possesses distinctive capabilities other animals lack. If AI were to surpass human intelligence and become superintelligent, it might become uncontrollable. Just as the fate of the mountain gorilla depends on human goodwill, the fate of humanity could depend on the actions of a future machine superintelligence.
The plausibility of existential catastrophe due to AI is widely debated. It hinges in part on whether AGI or superintelligence is achievable, the speed at which dangerous capabilities and behaviors emerge, and whether practical scenarios for AI takeovers exist. Concerns about superintelligence have been voiced by computer scientists and tech CEOs such as Geoffrey Hinton, Yoshua Bengio, Alan Turing, Elon Musk, and OpenAI CEO Sam Altman. In 2022, a survey of AI researchers with a 17% response rate found that the majority believed there is a 10 percent or greater chance that human inability to control AI will cause an existential catastrophe. In 2023, hundreds of AI experts and other notable figures signed a statement declaring, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". Following increased concern over AI risks, government leaders such as United Kingdom prime minister Rishi Sunak and United Nations Secretary-General António Guterres called for an increased focus on global AI regulation.
Two sources of concern stem from the problems of AI control and alignment. Controlling a superintelligent machine or instilling it with human-compatible values may be difficult. Many researchers believe that a superintelligent machine would likely resist attempts to disable it or change its goals, as that would prevent it from accomplishing its present goals. It would be extremely challenging to align a superintelligence with the full breadth of significant human values and constraints. In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation.
A third source of concern is the possibility of a sudden "intelligence explosion" that catches humanity unprepared. In this scenario, an AI more intelligent than its creators would be able to recursively improve itself at an exponentially increasing rate, improving too quickly for its handlers or society at large to control. Empirically, examples like AlphaZero, which taught itself to play Go and quickly surpassed human ability, show that domain-specific AI systems can sometimes progress from subhuman to superhuman ability very quickly, although such machine learning systems do not recursively improve their fundamental architecture.
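To make the growth claim concrete, the following is a minimal toy model, not drawn from the sources above, of what "exponentially increasing" capability would mean: if each self-improvement cycle multiplies a system's capability by a constant factor, capability grows exponentially in the number of cycles. The function name and parameter values are hypothetical illustrations, not estimates.

```python
# Minimal toy model of an "intelligence explosion" growth curve.
# Assumption (hypothetical): each self-improvement cycle multiplies
# capability by a constant factor (1 + gain_per_cycle), which yields
# exponential growth in the number of cycles.

def capability_after(cycles: int, gain_per_cycle: float = 0.10,
                     initial: float = 1.0) -> float:
    """Capability relative to baseline after a given number of improvement cycles."""
    return initial * (1.0 + gain_per_cycle) ** cycles

if __name__ == "__main__":
    for n in (0, 10, 25, 50, 100):
        # With a 10% gain per cycle, 100 cycles already exceed 13,000x baseline.
        print(f"after {n:3d} cycles: {capability_after(n):10.1f}x baseline")
```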