

An Open Letter from Humanity | Pause Giant AI Experiments for 6 Months

2023-03-29 16:04 | By 產(chǎn)品君

At 9:00 AM Beijing time on March 29, more than 1,000 scientists published a joint open letter calling for a pause of at least 6 months on the training of AI systems beyond GPT-4, formally firing the first shot in humanity's resistance to AI.

Prominent signatories include Elon Musk, Gary Marcus, Yoshua Bengio, Stuart Russell, Max Tegmark, Victoria Krakovna, Pattie Maes, Grady Booch, Andrew Yang, and Tristan Harris, for a total of 1,125 scientists and entrepreneurs.

If this is not driven by self-interest, then they must truly have discovered something alarming about AI.


Below is the original text of the open letter:

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

Selected signatories:

Yoshua Bengio, professor at the University of Montreal, Turing Award laureate, deep learning pioneer, head of the Montreal Institute for Learning Algorithms.

Stuart Russell, Professor of Computer Science at UC Berkeley, Director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: A Modern Approach.

Elon Musk, CEO of SpaceX, Tesla, and Twitter.

Steve Wozniak, co-founder of Apple.

Yuval Noah Harari, author and professor at the Hebrew University of Jerusalem.

Andrew Yang, co-chair of the Forward Party, 2020 presidential candidate, New York Times bestselling author, and Presidential Ambassador for Global Entrepreneurship.

Connor Leahy, CEO of Conjecture.

Jaan Tallinn, co-founder of Skype, co-founder of the Centre for the Study of Existential Risk and the Future of Life Institute.

Evan Sharp, co-founder of Pinterest.

Chris Larsen, co-founder of Ripple.

Emad Mostaque, CEO of Stability AI.

Valerie Pisano, President and CEO of MILA.

John J. Hopfield, Professor Emeritus at Princeton University, inventor of associative neural networks.

Rachel Bronson, President of the Bulletin of the Atomic Scientists.

Max Tegmark, professor at the MIT Center for Artificial Intelligence and Fundamental Interactions, president of the Future of Life Institute.

Anthony Aguirre, Professor of Physics at UC Santa Cruz, Executive Director of the Future of Life Institute.

Victoria Krakovna, Research Scientist at DeepMind, co-founder of the Future of Life Institute.

Emilia Javorsky, Physician-Scientist & Director, Future of Life Institute.

Sean O'Heigeartaigh, Executive Director of the Cambridge Centre for the Study of Existential Risk.

Tristan Harris, Executive Director of the Center for Humane Technology.

Marc Rotenberg, President of the Center for AI and Digital Policy.

Nico Miailhe, Founder and President of The Future Society (TFS).

Zachary Kenton, Senior Research Scientist at DeepMind.

Ramana Kumar, Research Scientist at DeepMind.

Gary Marcus, AI researcher and Professor Emeritus at New York University.

Steve Omohundro, CEO of Beneficial AI Research.

