

[Translation] Pause Giant AI Experiments: An Open Letter

2023-04-07 14:44 · Author: 持盾水母

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.


Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.


Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.


AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.


AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.


In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.


Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.


We have prepared some FAQs in response to questions and discussion in the media and elsewhere. You can find them here: https://futureoflife.org/ai/faqs-about-flis-open-letter-calling-for-a-pause-on-giant-ai-experiments/


References:

[1]

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

Bostrom, N. (2016). Superintelligence. Oxford University Press.

Bucknall, B. S., & Dori-Hacohen, S. (2022, July). Current and near-term AI as a potential existential risk factor. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 119-129).

Carlsmith, J. (2022). Is Power-Seeking AI an Existential Risk? arXiv preprint arXiv:2206.13353.

Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. Norton & Company.

Cohen, M. et al. (2022). Advanced Artificial Agents Intervene in the Provision of Reward. AI Magazine, 43(3) (pp. 282-293).

Eloundou, T., et al. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.

Hendrycks, D., & Mazeika, M. (2022). X-risk Analysis for AI Research. arXiv preprint arXiv:2206.05862.

Ngo, R. (2022). The alignment problem from a deep learning perspective. arXiv preprint arXiv:2209.00626.

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.

Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.

Weidinger, L. et al. (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.

[2]

Ordonez, V. et al. (2023, March 16). OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: 'A little bit scared of this'. ABC News.

Perrigo, B. (2023, January 12). DeepMind CEO Demis Hassabis Urges Caution on AI. Time.

[3]

Bubeck, S. et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv:2303.12712.

OpenAI (2023). GPT-4 Technical Report. arXiv:2303.08774.

[4]

Ample legal precedent exists – for example, the widely adopted OECD AI Principles require that AI systems "function appropriately and do not pose unreasonable safety risk".

[5]

Examples include human cloning, human germline modification, gain-of-function research, and eugenics.


Original URL: https://futureoflife.org/open-letter/pause-giant-ai-experiments/

(Accessing it probably requires a VPN.)


