AI could pose “risk of extinction” akin to nuclear war and pandemics, experts say

BY AIMEE PICCHI

MAY 30, 2023 / 12:44 PM / MONEYWATCH

Artificial intelligence could pose a “risk of extinction” to humanity on the scale of nuclear war or pandemics, and mitigating that risk should be a “global priority,” according to an open letter signed by AI leaders such as Sam Altman of OpenAI as well as Geoffrey Hinton, known as the “godfather” of AI.

The one-sentence open letter, issued by the nonprofit Center for AI Safety, is both brief and ominous, and does not elaborate on how the more than 300 signatories foresee AI developing into an existential threat to humanity.

In an email to CBS MoneyWatch, Dan Hendrycks, the director of the Center for AI Safety, wrote that there are “numerous pathways to societal-scale risks from AI.”

“For example, AIs could be used by malicious actors to design novel bioweapons more lethal than natural pandemics,” Hendrycks wrote. “Alternatively, malicious actors could…
