

Pause Giant AI Experiments: An Open Letter — a call for all labs training AI systems more powerful than GPT-4 to pause

2023-03-29 23:42 · By 魚C-小甲魚

https://futureoflife.org/open-letter/pause-giant-ai-experiments/

今天,著名安全機(jī)構(gòu)生命未來研究所(Future of Life Institute,F(xiàn)LI)發(fā)布了一封公開信,信中呼吁:


All organizations worldwide to pause the training of AI systems more powerful than GPT-4 for at least six months, and to use those six months to draw up AI safety protocols.


At the time of writing, 1,125 people had signed the open letter, including Tesla CEO Elon Musk, Turing Award winner Yoshua Bengio, and Apple co-founder Steve Wozniak.

(A fuller list of notable signatories appears at the end.)

The number of signatures is still growing, and the letter's wording is unusually forceful.

The full text of the letter follows.


AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.


Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.


Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.



AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.


AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.


In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.


Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

A partial list of signatories:

Yoshua Bengio, Université de Montréal; Turing Award laureate for the development of deep learning; head of the Montreal Institute for Learning Algorithms.

Stuart Russell, professor of computer science at Berkeley, director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: A Modern Approach.

Elon Musk, CEO of SpaceX, Tesla, and Twitter.

Steve Wozniak, co-founder of Apple.

Yuval Noah Harari, author and professor at the Hebrew University of Jerusalem.

Emad Mostaque, CEO of Stability AI.

Connor Leahy, CEO of Conjecture.

Jaan Tallinn, co-founder of Skype; Centre for the Study of Existential Risk; Future of Life Institute.

Evan Sharp, co-founder of Pinterest.

Chris Larsen, co-founder of Ripple.

John J Hopfield, professor emeritus at Princeton University, inventor of associative neural networks.

