The Economist | How to worry wisely about AI

Article overview: The piece discusses the excitement and alarm aroused by the rapid progress of artificial intelligence (AI), and asks how concerned people should be. It introduces the capabilities of the new “l(fā)arge language models” (LLMs) and the direction in which those capabilities are developing. It considers how AI might threaten jobs, factual accuracy, reputations and even the existence of humanity itself, and how governments should regulate the technology. Finally, it argues for balancing AI’s promise against its risks, and for being ready to adapt.

Leaders
How to worry wisely about AI
Rapid progress in AI is arousing fear as well as excitement. How concerned should you be?
“SHOULD WE AUTOMATE away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart...and replace us? Should we risk loss of control of our civilisation?” These questions were asked last month in an open letter from the Future of Life Institute, an NGO. It called for a six-month “pause” in the creation of the most advanced forms of artificial intelligence (AI), and was signed by tech luminaries including Elon Musk. It is the most prominent example yet of how rapid progress in AI has sparked anxiety about the potential dangers of the technology.
In particular, new “l(fā)arge language models” (LLMs)—the sort that powers ChatGPT, a chatbot made by OpenAI, a startup— have surprised even their creators with their unexpected talents as they have been scaled up. Such “emergent” abilities include everything from solving logic puzzles and writing computer code to identifying films from plot summaries written in emoji.
These models stand to transform humans’ relationship with computers, knowledge and even with themselves. Proponents of AI argue for its potential to solve big problems by developing new drugs, designing new materials to help fight climate change, or untangling the complexities of fusion power. To others, the fact that AIs’ capabilities are already outrunning their creators’ understanding risks bringing to life the science-fiction disaster scenario of the machine that outsmarts its inventor, often with fatal consequences.
This bubbling mixture of excitement and fear makes it hard to weigh the opportunities and risks. But lessons can be learned from other industries, and from past technological shifts. So what has changed to make AI so much more capable? How scared should you be? And what should governments do?
In a special Science section, we explore the workings of LLMs and their future direction. The first wave of modern AI systems, which emerged a decade ago, relied on carefully labelled training data. Once exposed to a sufficient number of labelled examples, they could learn to do things like recognise images or transcribe speech. Today’s systems do not require pre-labelling, and as a result can be trained using much larger data sets taken from online sources. LLMs can, in effect, be trained on the entire internet—which explains their capabilities, good and bad.
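
To make the shift concrete, here is a minimal sketch, assuming nothing about any real lab’s pipeline, of the two training regimes the paragraph contrasts: supervised learning on hand-labelled examples, and the self-supervised next-token objective used for LLMs, in which raw text supplies its own labels. All names and data below are hypothetical.

```python
# Illustrative sketch only: real systems train large neural networks at vast scale.

# Regime 1: supervised learning, where every input needs a human-provided label.
labelled_data = [
    ("photo_001.jpg", "cat"),
    ("photo_002.jpg", "dog"),
]
print(f"{len(labelled_data)} examples, each hand-labelled in advance")

# Regime 2: self-supervised next-token prediction, where text labels itself.
def next_token_pairs(text: str):
    """Turn raw text into (context, next word) training pairs; no labelling step."""
    words = text.split()
    return [(words[:i], words[i]) for i in range(1, len(words))]

for context, target in next_token_pairs("the cat sat on the mat"):
    print(f"context={' '.join(context)!r} -> predict {target!r}")
```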
Those capabilities became apparent to a wider public when ChatGPT was released in November. A million people had used it within a week; 100m within two months. It was soon being used to generate school essays and wedding speeches. ChatGPT’s popularity, and Microsoft’s move to incorporate it into Bing, its search engine, prompted rival firms to release chatbots too.
Some of these produced strange results. Bing Chat suggested to a journalist that he should leave his wife. ChatGPT has been accused of defamation by a law professor. LLMs produce answers that have the patina of truth, but often contain factual errors or outright fabrications. Even so, Microsoft, Google and other tech firms have begun to incorporate LLMs into their products, to help users create documents and perform other tasks.
The recent acceleration in both the power and visibility of AI systems, and growing awareness of their abilities and defects, have raised fears that the technology is now advancing so quickly that it cannot be safely controlled. Hence the call for a pause, and growing concern that AI could threaten not just jobs, factual accuracy and reputations, but the existence of humanity itself.
Extinction? Rebellion?
The fear that machines will steal jobs is centuries old. But so far new technology has created new jobs to replace the ones it has destroyed. Machines tend to be able to perform some tasks, not others, increasing demand for people who can do the jobs machines cannot. Could this time be different? A sudden dislocation in job markets cannot be ruled out, even if so far there is no sign of one. Previous technology has tended to replace unskilled tasks, but LLMs can perform some white-collar tasks, such as summarising documents and writing code.
The degree of existential risk posed by AI has been hotly debated. Experts are divided. In a survey of AI researchers carried out in 2022, 48% thought there was at least a 10% chance that AI’s impact would be “extremely bad (eg, human extinction)”. But 25% said the risk was 0%; the median researcher put the risk at 5%. The nightmare is that an advanced AI causes harm on a massive scale, by making poisons or viruses, or persuading humans to commit terrorist acts. It need not have evil intent: researchers worry that future AIs may have goals that do not align with those of their human creators.
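
As a worked illustration of how such survey figures fit together, the snippet below computes the share of respondents giving a zero estimate, the share at 10% or above, and the median, from an invented list of risk estimates; the numbers are assumptions for illustration, not the survey’s raw data.

```python
# Invented risk estimates (probability of an "extremely bad" outcome) from a
# hypothetical panel of respondents -- not the 2022 survey's actual data.
import statistics

estimates = [0.00, 0.00, 0.01, 0.02, 0.05, 0.05, 0.10, 0.20, 0.50]

share_zero = sum(e == 0.0 for e in estimates) / len(estimates)
share_ten_plus = sum(e >= 0.10 for e in estimates) / len(estimates)

print(f"said the risk was 0%:        {share_zero:.0%}")
print(f"put the risk at 10% or more: {share_ten_plus:.0%}")
print(f"median estimate:             {statistics.median(estimates):.0%}")
```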
Such scenarios should not be dismissed. But all involve a huge amount of guesswork, and a leap from today’s technology. And many imagine that future AIs will have unfettered access to energy, money and computing power, which are real constraints today, and could be denied to a rogue AI in future. Moreover, experts tend to overstate the risks in their area, compared with other forecasters. (And Mr Musk, who is launching his own AI startup, has an interest in his rivals downing tools.) Imposing heavy regulation, or indeed a pause, today seems an over-reaction. A pause would also be unenforceable.
Regulation is needed, but for more mundane reasons than saving humanity. Existing AI systems raise real concerns about bias, privacy and intellectual-property rights. As the technology advances, other problems could become apparent. The key is to balance the promise of AI with an assessment of the risks, and to be ready to adapt.
So far governments are taking three different approaches. At one end of the spectrum is Britain, which has proposed a “l(fā)ight-touch” approach with no new rules or regulatory bodies, but applies existing regulations to AI systems. The aim is to boost investment and turn Britain into an “AI superpower”. America has taken a similar approach, though the Biden administration is now seeking public views on what a rulebook might look like.
The EU is taking a tougher line. Its proposed law categorises different uses of AI by the degree of risk, and requires increasingly stringent monitoring and disclosure as the degree of risk rises from, say, music recommendation to self-driving cars. Some uses of AI are banned altogether, such as subliminal advertising and remote biometrics. Firms that break the rules will be fined. For some critics, these regulations are too stifling.
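
The proposal’s tiered logic can be sketched in code. The mapping and obligations below are loose illustrations of the risk-based structure described above, not the legal text; the tier names and example uses are assumptions chosen for clarity.

```python
# Schematic sketch of risk-tiered AI regulation, loosely modelled on the EU
# proposal described above. Tier assignments here are illustrative assumptions.
RISK_TIERS = {
    "music recommendation": "minimal",        # little or no extra obligation
    "chatbot": "limited",                     # transparency duties
    "self-driving car": "high",               # stringent monitoring and disclosure
    "subliminal advertising": "unacceptable", # banned altogether
    "remote biometrics": "unacceptable",      # banned altogether
}

OBLIGATIONS = {
    "minimal": "no new requirements",
    "limited": "disclose to users that they are interacting with an AI",
    "high": "stringent monitoring, documentation and conformity checks",
    "unacceptable": "prohibited; firms that break the rules are fined",
}

def obligations(use_case: str) -> str:
    tier = RISK_TIERS.get(use_case, "unclassified")
    return OBLIGATIONS.get(tier, "assess before deployment")

print(obligations("self-driving car"))
```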
But others say an even sterner approach is needed. Governments should treat AI like medicines, with a dedicated regulator, strict testing and pre-approval before public release. China is doing some of this, requiring firms to register AI products and undergo a security review before release. But safety may be less of a motive than politics: a key requirement is that AIs’ output reflects the “core value of socialism”.
What to do? The light-touch approach is unlikely to be enough. If AI is as important a technology as cars, planes and medicines—and there is good reason to believe that it is—then, like them, it will need new rules. Accordingly, the EU’s model is closest to the mark, though its classification system is overwrought and a principles-based approach would be more flexible. Compelling disclosure about how systems are trained, how they operate and how they are monitored, and requiring inspections, would be comparable to similar rules in other industries.
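
What compelled disclosure might look like in practice can be sketched as a structured record, in the spirit of the “model cards” some labs already publish voluntarily; every field below is an assumption for illustration, not a mandated format.

```python
# A sketch of the kind of structured disclosure the paragraph envisages,
# covering training, operation and monitoring. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ModelDisclosure:
    name: str
    training_data: str            # how the system was trained
    intended_use: str             # how it operates and what it is for
    monitoring: str               # how it is monitored after release
    known_limitations: list = field(default_factory=list)

card = ModelDisclosure(
    name="example-llm",                                        # hypothetical model
    training_data="web text up to 2023, filtered (hypothetical)",
    intended_use="drafting and summarising documents",
    monitoring="sampled output audits and incident reporting",
    known_limitations=["factual errors", "bias from training data"],
)
print(card)
```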
This could allow for tighter regulation over time, if needed. A dedicated regulator may then seem appropriate; so too may intergovernmental treaties, similar to those that govern nuclear weapons, should plausible evidence emerge of existential risk. To monitor that risk, governments could form a body modelled on CERN, a particle-physics laboratory, that could also study AI safety and ethics—areas where companies lack incentives to invest as much as society might wish.
This powerful technology poses new risks, but also offers extraordinary opportunities. Balancing the two means treading carefully. A measured approach today can provide the foundations on which further rules can be added in future. But the time to start building those foundations is now.
Analysis of long and difficult sentences
1. “These questions were asked last month in an open letter from the Future of Life Institute, an NGO.”
The core is a single passive main clause: the subject is “These questions” and the predicate is “were asked”, with “l(fā)ast month” as a time adverbial. “from the Future of Life Institute” is not a clause but a prepositional phrase modifying “an open letter” and identifying its source, while “an NGO” is an appositive explaining what the Future of Life Institute is: a non-governmental organisation.

2. “It called for a six-month ‘pause’ in the creation of the most advanced forms of artificial intelligence (AI), and was signed by tech luminaries including Elon Musk.”
This sentence has one subject, “It” (the open letter), followed by two predicates joined by “and”: “called for a six-month ‘pause’ in the creation of the most advanced forms of artificial intelligence” and “was signed by tech luminaries including Elon Musk”. The second predicate is passive rather than a separate clause, and “including Elon Musk” specifies “tech luminaries” by example; the signatures are what lend the call its prominence.

3. “These models stand to transform humans’ relationship with computers, knowledge and even with themselves.”
The subject is “These models” and the predicate is “stand to transform”, where “stand to” means “are likely to” or “are in a position to”; the infinitive follows the verb, not the subject. The object of “transform” is “humans’ relationship”, expanded by three parallel “with” phrases: “with computers”, “(with) knowledge” and “even with themselves”.

4. “The first wave of modern AI systems, which emerged a decade ago, relied on carefully labelled training data.”
The main clause is “The first wave of modern AI systems relied on carefully labelled training data”, with “The first wave of modern AI systems” as subject and “relied on” as predicate. “which emerged a decade ago” is a non-restrictive relative clause modifying the subject: “which” refers back to the first wave of systems, “emerged” is its verb, and “a decade ago” is a time adverbial.

5. “ChatGPT’s popularity, and Microsoft’s move to incorporate it into Bing, its search engine, prompted rival firms to release chatbots too.”
The subject is two coordinated noun phrases rather than two independent clauses: “ChatGPT’s popularity” and “Microsoft’s move to incorporate it into Bing”, with “its search engine” as an appositive to “Bing”. The predicate is “prompted”, which takes “rival firms” as its object and the infinitive “to release chatbots too” as an object complement, on the pattern “prompt somebody to do something”.

6. “And many imagine that future AIs will have unfettered access to energy, money and computing power, which are real constraints today, and could be denied to a rogue AI in future.”
The subject is “many” (that is, many people) and the verb is “imagine”, whose object is the that-clause “that future AIs will have unfettered access to energy, money and computing power”. “which are real constraints today” is a non-restrictive relative clause whose antecedent is “energy, money and computing power”, and “could be denied to a rogue AI in future” is a second predicate coordinated within that relative clause: the same resources that constrain AIs today could be withheld from a rogue AI tomorrow.

7. “The key is to balance the promise of AI with an assessment of the risks, and to be ready to adapt.”
This is a simple sentence rather than a compound one: the subject is “The key”, the linking verb is “is”, and the complement is two coordinated infinitive phrases, “to balance the promise of AI with an assessment of the risks” and “to be ready to adapt”. The pattern “balance A with B” here means weighing AI’s promise against an appraisal of its dangers.

8. “If AI is as important a technology as cars, planes and medicines—and there is good reason to believe that it is—then, like them, it will need new rules.”
The skeleton is “it will need new rules”, with “it” referring to AI. “If AI is as important a technology as cars, planes and medicines” is a conditional clause built on the pattern “as + adjective + a + noun + as”; the dashes insert the parenthetical comment “and there is good reason to believe that it is”, and “l(fā)ike them” compares AI with those other regulated technologies.

9. “Accordingly, the EU’s model is closest to the mark, though its classification system is overwrought and a principles-based approach would be more flexible.”
The main clause is “the EU’s model is closest to the mark”, where “closest to the mark” means closest to getting it right; “Accordingly” ties the judgment to the preceding argument. “though” introduces a concessive clause containing two coordinated statements: the classification system is overwrought (overly elaborate), and a principles-based approach would be more flexible.

10. “Compelling disclosure about how systems are trained, how they operate and how they are monitored, and requiring inspections, would be comparable to similar rules in other industries.”
The subject is two coordinated gerund phrases: “Compelling disclosure about...” (here “compelling” is a verb meaning forcing, not an adjective) and “requiring inspections”. Within the first phrase, “about” takes three parallel wh-clauses as its object: “how systems are trained”, “how they operate” and “how they are monitored”. The predicate is “would be comparable to similar rules in other industries”.

11. “To monitor that risk, governments could form a body modelled on CERN, a particle-physics laboratory, that could also study AI safety and ethics—areas where companies lack incentives to invest as much as society might wish.”
“To monitor that risk” is an infinitive of purpose. In the main clause the subject is “governments”, the predicate is “could form” and the object is “a body”. “modelled on CERN” is a past-participle phrase modifying “a body”, with “a particle-physics laboratory” as an appositive identifying CERN (not a metaphor). “that could also study AI safety and ethics” is a relative clause whose antecedent is “a body”, and “areas where companies lack incentives to invest as much as society might wish” is an appositive to “AI safety and ethics”, itself containing a relative clause introduced by “where”.