The Economist bilingual reading: Is AI an angel or a devil? (Part 2)
Original title:
How to worry wisely about AI
Rapid progress in AI is arousing fear as well as excitement. How concerned should you be?
Technology and society
[Paragraph 10]
The degree of existential risk posed by AI has been hotly debated. Experts are divided.
In a survey of AI researchers carried out in 2022, 48% thought there was at least a 10% chance that AI’s impact would be “extremely bad (eg, human extinction)”.
But 25% said the risk was 0%; the median researcher put the risk at 5%.
The nightmare is that an advanced AI causes harm on a massive scale, by making poisons or viruses, or persuading humans to commit terrorist acts.
It need not have evil intent: researchers worry that future AIs may have goals that do not align with those of their human creators.
[Paragraph 11]
Such scenarios should not be dismissed. But all involve a huge amount of guesswork, and a leap from today’s technology.
And many imagine that future AIs will have unfettered access to energy, money and computing power, which are real constraints today, and could be denied to a rogue AI in future.
Moreover, experts tend to overstate the risks in their area, compared with other forecasters. (And Mr Musk, who is launching his own AI startup, has an interest in his rivals downing tools.)
Imposing heavy regulation, or indeed a pause, today seems an over-reaction. A pause would also be unenforceable.
[Paragraph 12]
Regulation is needed, but for more mundane reasons than saving humanity.
Existing AI systems raise real concerns about bias, privacy and intellectual-property rights.
As the technology advances, other problems could become apparent.
The key is to balance the promise of AI with an assessment of the risks, and to be ready to adapt.
[Paragraph 13]
So far governments are taking three different approaches.
At one end of the spectrum is Britain, which has proposed a “light-touch” approach with no new rules or regulatory bodies, applying existing regulations to AI systems instead.
The aim is to boost investment and turn Britain into an “AI superpower”.
America has taken a similar approach, though the Biden administration is now seeking public views on what a rulebook might look like.
[Paragraph 14]
The EU is taking a tougher line.
Its proposed law categorises different uses of AI by the degree of risk, and requires increasingly stringent monitoring and disclosure as the degree of risk rises from, say, music-recommendation to self-driving cars.
Some uses of AI are banned altogether, such as subliminal advertising and remote biometrics. Firms that break the rules will be fined.
For some critics, these regulations are too stifling.
[Paragraph 15]
But others say an even sterner approach is needed.
Governments should treat AI like medicines, with a dedicated regulator, strict testing and pre-approval before public release.
China is doing some of this, requiring firms to register AI products and undergo a security review before release.
[Paragraph 16]
What to do? The light-touch approach is unlikely to be enough.
If AI is as important a technology as cars, planes and medicines—and there is good reason to believe that it is—then, like them, it will need new rules.
Accordingly, the EU’s model is closest to the mark, though its classification system is overwrought and a principles-based approach would be more flexible.
Compelling disclosure about how systems are trained, how they operate and how they are monitored, and requiring inspections, would be comparable to similar rules in other industries.
[Paragraph 17]
This could allow for tighter regulation over time, if needed.
A dedicated regulator may then seem appropriate; so too may intergovernmental treaties, similar to those that govern nuclear weapons, should plausible evidence emerge of existential risk.
To monitor that risk, governments could form a body modelled on CERN, a particle-physics laboratory, that could also study AI safety and ethics—areas where companies lack incentives to invest as much as society might wish.
[Paragraph 18]
This powerful technology poses new risks, but also offers extraordinary opportunities. Balancing the two means treading carefully.
A measured approach today can provide the foundations on which further rules can be added in future.
But the time to start building those foundations is now.
(Congratulations on finishing. English word count of this part: roughly 697 of 1,406.)
Source: The Economist, Leaders section, April 22nd 2023.
Close-reading notes from: 自由英語之路
Translated and compiled by: Irene
Edited and proofread by: Irene
For personal English-study and exchange use only.

[Supplementary material] (from the internet)
Light-touch approach: also rendered as “light regulation” or “loose policy”, this usually means regulating or intervening in a given area in a relatively gentle, flexible way, so as to restrict its autonomy and freedom as little as possible. It typically relies on market forces and self-regulation, and intervenes less forcefully and less extensively than heavy-handed oversight.
CERN (Conseil Européen pour la Recherche Nucléaire) is one of the world’s largest basic-science research institutions. Headquartered near Geneva, Switzerland, it has 22 member states. CERN studies the physics of elementary particles, using accelerators to raise charged particles to high energies and developing detectors to observe the resulting reactions. Its discoveries include important particles such as the W and Z bosons, and its experiments have probed fundamental constituents of matter such as quarks and gluons. CERN is also the birthplace of the Web: the World Wide Web was invented there, a major contribution to modern information and communication technology.
[Key sentences] (3)
Imposing heavy regulation, or indeed a pause, today seems an over-reaction. A pause would also be unenforceable.
Governments should treat AI like medicines, with a dedicated regulator, strict testing and pre-approval before public release.
This powerful technology poses new risks, but also offers extraordinary opportunities. Balancing the two means treading carefully.
