
Daily Translation #11

2023-09-19 21:26 Author: Glaaaacier

We Can Prevent AI Disaster Like We Prevented Nuclear Catastrophe

On 16 July 1945, the world changed forever. The Manhattan Project's "Trinity" test, directed by Robert Oppenheimer, gave humanity for the first time the ability to wipe itself out: an atomic bomb was successfully detonated 210 miles south of Los Alamos, New Mexico.

On 6 August 1945, the U.S. military dropped an atomic bomb on Hiroshima, and three days later another on Nagasaki. The two bombs unleashed unprecedented destructive power; they brought a fragile peace to the end of the Second World War, but they also left the world in the shadow of this new threat.

While applications of nuclear technology brought us abundant energy, they also pointed toward a possible future in which civilization is destroyed in nuclear war. The explosive reach of nuclear technology had expanded to a global scale. It became increasingly clear that governing nuclear technology through international cooperation was the way to avert a global catastrophe, and if robust institutions were to be built to govern it, time was of the essence.

In 1952, 11 European countries founded CERN to collaborate on purely fundamental scientific nuclear research, making clear that the organization's work would serve the public good. The International Atomic Energy Agency (IAEA) was established in 1957 to monitor global uranium stockpiles and limit nuclear proliferation. Together with other institutions, these bodies have helped us come through the past 70 years safely.

We believe that humanity is once again facing an explosive leap in technology: the development of advanced artificial intelligence. Left unrestrained, this powerful technology is bound to bring destruction upon humanity; placed under sensible safety controls, it could also create a better future.


The fear of artificial general intelligence

Experts have long been warning about the development of artificial general intelligence (AGI). Many distinguished AI scientists and leaders of the major AI companies, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, have signed a statement from the Center for AI Safety: mitigating the risk of human extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. A few months earlier, another open letter calling for a pause on giant AI experiments gathered more than 27,000 signatures, including those of Turing Award winners Yoshua Bengio and Geoffrey Hinton.

This is because a small group of AI companies (OpenAI, Google DeepMind, Anthropic) is working to build AGI: not just chatbots like ChatGPT, but AIs that are "autonomous and outperform humans at most economic activities." Ian Hogarth, an investor and now chair of the UK's Foundation Model Taskforce, calls these AIs "godlike" and has implored governments to slow down their development. Even the developers of the technology see danger in it. OpenAI CEO Sam Altman has said that the "development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."

World leaders are calling for an international institution to deal with the threat posed by AGI: a "CERN" or an "IAEA" for AI. In June, U.S. President Biden and U.K. Prime Minister Sunak discussed such an organization. U.N. Secretary-General Antonio Guterres also believes we need one. Given the growing consensus on addressing AI risks through international cooperation, we should think concretely about how such an institution could be built.


Could MAGIC become a "CERN for AI"?

MAGIC (the Multilateral AGI Consortium) would be the world's only advanced, secure AI facility dedicated to safety-first research and development of advanced AI. Like CERN, MAGIC would move AGI development out of the hands of private companies and into an international organization committed to the safe development of AI.

MAGIC would hold exclusive rights to high-risk research and the development of advanced AI. It would be illegal for other entities to pursue AGI development on their own. This would not affect the vast majority of AI research and development; it would cover only frontier, AGI-relevant work, just as we already handle dangerous R&D in other fields of technology. Research on engineering lethal pathogens is outright banned or confined to the highest biosafety-level laboratories, while most drug research proceeds under the oversight of regulatory agencies such as the U.S. Food and Drug Administration (FDA).

MAGIC would focus solely on preventing the high risks posed by the development of frontier AI systems (godlike AIs). Only once a breakthrough has been proven safe would MAGIC share it with the world.

To ensure that high-risk AI research remains secure and under MAGIC's strict oversight, there would be a global moratorium on creating AIs that use more than a set amount of computing power (for a good overview of why computing power matters, see https://time.com/6300942/ai-progress-charts/). This is similar to how uranium, the main resource used for nuclear weapons and nuclear energy, is already handled internationally.
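As a purely illustrative aside (not part of the article), the sketch below shows how such a compute threshold might be checked in practice, using the widely cited rule of thumb that training compute is roughly 6 FLOPs per parameter per training token. The cap value, model size, and token count are hypothetical assumptions, not figures proposed by the authors.

```python
# Illustrative sketch only (not from the article): estimate a training run's
# compute with the common ~6 * parameters * tokens FLOPs rule of thumb and
# check it against a hypothetical cap. All numbers below are assumptions.

def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Rough total training compute using the ~6*N*D approximation."""
    return 6.0 * num_parameters * num_tokens


def exceeds_compute_cap(num_parameters: float, num_tokens: float, cap_flops: float) -> bool:
    """Return True if the estimated run would exceed the (hypothetical) cap."""
    return estimated_training_flops(num_parameters, num_tokens) > cap_flops


if __name__ == "__main__":
    HYPOTHETICAL_CAP_FLOPS = 1e25   # placeholder threshold, not a real policy figure
    params = 70e9                   # assumed 70-billion-parameter model
    tokens = 2e12                   # assumed 2 trillion training tokens

    flops = estimated_training_flops(params, tokens)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    if exceeds_compute_cap(params, tokens, HYPOTHETICAL_CAP_FLOPS):
        print("This run would exceed the hypothetical cap.")
    else:
        print("This run would stay under the hypothetical cap.")
```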

Freed from competitive pressures, MAGIC could provide the safety and security this transformative technology demands and let all signatory countries share in its benefits. MAGIC could succeed just as CERN has.

The U.S. and the U.K. are pushing hard for this multilateral effort and could bring it into being after the Global Summit on Artificial Intelligence to be held this November.

Averting the existential risk posed by AGI is a daunting task, and leaving it to private companies is an even more dangerous gamble. We do not let individuals or corporations develop nuclear weapons for private use, and we should not allow the same to happen with dangerously powerful AI. We succeeded in avoiding nuclear catastrophe, and we can protect our future once more, but not if we sit idle. We must place advanced AI development in the hands of a global, trusted institution; only then can we create a safe future for everyone.

The institutions built after World War II shielded us from nuclear war by governing the development of nuclear technology. Now, as humanity faces a new global threat, uncontrolled artificial general intelligence, we must once again act to secure our future.


Translator's note: Frankly, this piece is highly provocative and emotionally compelling, but its purpose is transparent: on the surface it plays up the AI threat, while in substance it argues for a technology monopoly. The article stresses the danger of AI from many angles yet never offers a single concrete example. It leans on the analogy with nuclear weapons, whose destructive power is self-evident (though, of course, I was not there when the bombs fell on Japan), whereas AI's dangers are asserted entirely by the tech giants themselves, and only in indirect terms (the original's "godlike AIs", for instance); even a Skynet story would have been more convincing. Taken on its own, then, the article is not persuasive enough.

But even granting all of that, I would rather let AI destroy humanity than hand it over to the capitalists.


Original Article:

We Can Prevent AI Disaster Like We Prevented Nuclear Catastrophe

On 16th July 1945 the world changed forever. The Manhattan Project’s ‘Trinity’ test, directed by Robert Oppenheimer, endowed humanity for the first time with the ability to wipe itself out: an atomic bomb had been successfully detonated 210 miles south of Los Alamos, New Mexico.

On 6th August 1945 the bomb was dropped on Hiroshima and three days later, Nagasaki— unleashing unprecedented destructive power. The end of World War II brought a fragile peace, overshadowed by this new, existential threat.

While nuclear technology promised an era of abundant energy, it also launched us into a future where nuclear war could lead to the end of our civilization. The ‘blast radius’ of our technology had increased to a global scale. It was becoming increasingly clear that governing nuclear technology to avoid a global catastrophe required international cooperation. Time was of the essence to set up robust institutions to deal with this.

In 1952, 11 countries set up CERN and tasked it with “collaboration in scientific [nuclear] research of a purely fundamental nature”—making clear that CERN’s research would be used for the public good. The International Atomic Energy Agency (IAEA) was also set up in 1957 to monitor global stockpiles of uranium and limit proliferation. Among others, these institutions helped us to survive over the last 70 years.

We believe that humanity is facing once more an increase in the ‘blast radius’ of technology: the development of advanced artificial intelligence. A powerful technology that could annihilate humanity if left unrestrained, but, if harnessed safely, could change the world for the better.


The specter of artificial general intelligence

Experts have been sounding the alarm on artificial general intelligence (AGI) development. Distinguished AI scientists and leaders of the major AI companies, including Sam Altman of OpenAI and Demis Hassabis of Google DeepMind, signed a statement from the Center for AI Safety that reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” A few months earlier, another letter calling for a pause in giant AI experiments was signed over 27,000 times, including by Turing Prize winners Yoshua Bengio and Geoffrey Hinton.

This is because a small group of AI companies (OpenAI, Google Deepmind, Anthropic) are aiming to create AGI: not just chatbots like ChatGPT, but AIs that are “autonomous and outperform humans at most economic activities”. Ian Hogarth, investor and now Chair of the UK’s Foundation Model Taskforce, calls these ‘godlike AIs’ and implored governments to slow down the race to build them. Even the developers of the technology themselves expect great danger from it. Altman, CEO of the company behind ChatGPT, said that the “Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.”

World leaders are calling for the establishment of an international institution to deal with the threat of AGI: a ‘CERN’ or ‘IAEA for AI’. In June, President Biden and U.K. Prime Minister Sunak discussed such an organization. The U.N. Secretary-General, Antonio Guterres, thinks we need one, too. Given this growing consensus for international cooperation to respond to the risks from AI, we need to lay out concretely how such an institution might be built.


Would a ‘CERN for AI’ look like MAGIC?

MAGIC (the Multilateral AGI Consortium) would be the world’s only advanced and secure AI facility focused on safety-first research and development of advanced AI. Like CERN, MAGIC will allow humanity to take AGI development out of the hands of private firms and lay it into the hands of an international organization mandated towards safe AI development.

MAGIC would have exclusivity when it comes to the high-risk research and development of advanced AI. It would be illegal for other entities to independently pursue AGI development. This would not affect the vast majority of AI research and development, and only focus on frontier, AGI-relevant research, similar to how we already deal with dangerous R&D with other technologies. Research on engineering lethal pathogens is outright banned or confined to very high biosafety level labs. At the same time, the vast majority of drug research is instead supervised by regulatory agencies like the FDA.

MAGIC will only be concerned with preventing the high-risk development of frontier AI systems - godlike AIs. Research breakthroughs done at MAGIC will only be shared with the outside world once proven demonstrably safe.

To make sure high-risk AI research remains secure and under strict oversight at MAGIC, a global moratorium on the creation of AIs using more than a set amount of computing power would be put in place (here’s a great overview of why computing power matters: https://time.com/6300942/ai-progress-charts/). This is similar to how we already deal with uranium internationally, the main resource used for nuclear weapons and energy.

Without competitive pressures, MAGIC can ensure the adequate safety and security needed for this transformative technology, and distribute the benefits to all signatories. CERN exists as a precedent for how we can succeed with MAGIC.

The U.S. and the U.K. are in a perfect position to facilitate this multilateral effort, and springboard its inception after the upcoming Global Summit on Artificial Intelligence in November this year.

Averting existential risk from AGI is daunting, and leaving this challenge to private companies is a very dangerous gamble. We don’t let individuals or corporations develop nuclear weapons for private use, and we shouldn’t allow this to happen with dangerous, powerful AI. We managed to not destroy ourselves with nuclear weapons, and we can secure our future again - but not if we remain idle. We must place advanced AI development into the hands of a new global, trusted institution and create a safer future for everyone.

Post-WWII institutions helped us avoid nuclear war by controlling nuclear development. As humanity faces a new global threat—uncontrolled artificial general intelligence (AGI)—we once again need to take action to secure our future.


Original URL:

https://time.com/6314045/prevent-ai-disaster-nuclear-catastrophe/



