‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
Original article: https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html
Translated by: 小野母喵 ??
For study purposes only; will be taken down on request.
For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.
Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe are key to their future.
On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
Dr. Hinton said he has quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so he can speak freely about the risks of A.I. A part of him, he said, now regrets his life’s work.
“I console myself with the normal excuse: if I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.
Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.
Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”
Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network: a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
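To make that one-line definition concrete, here is a minimal sketch of a neural network “l(fā)earning a skill by analyzing data.” Everything in it is an illustrative assumption rather than anything from the article: the task (the XOR truth table), the 2-4-1 layer sizes, the sigmoid activations and the learning rate are all arbitrary toy choices.

```python
# A toy neural network, assumed setup: 2 inputs -> 4 hidden units -> 1 output.
# It "learns a skill by analyzing data": reproducing the XOR truth table.
import numpy as np

rng = np.random.default_rng(0)

# The data it analyzes: four (input, target) examples of XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases; training will adjust them.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate (arbitrary toy choice)
for step in range(10000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # predictions for the four inputs

    # Backward pass: how much each weight contributed to the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge every weight to shrink the error.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

# Should drift toward [0, 1, 1, 0]; exact values depend on the random seed.
print(out.round(2).ravel())
```

The loop is the whole idea: compare the network’s predictions with the examples, work out how each weight contributed to the error, nudge every weight slightly, and repeat. The systems discussed in this article run essentially that same loop over vastly more data and parameters.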
In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but he left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield, what he calls “robot soldiers.”
In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
Google spent $44 million to acquire a company started by Dr. Hinton and his two students. Their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators, Yann LeCun and Yoshua Bengio, received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but inferior to the way humans handle language.
Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways, but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot, challenging Google’s core business, Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
Down the road, he is worried that future versions of the technology will pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually to run that code on their own. And he fears a day when truly autonomous weapons, those killer robots, become reality.
“The idea that this stuff could actually get smarter than people, a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes the race between Google, Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
But that may be impossible, he said. Unlike with nuclear weapons, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
He does not say that anymore.