
Pause AI Experiments (A Global Open Letter)

2023-04-06 20:44  Author: 太宇可斯

World-renowned scientists, the entrepreneur Elon Musk, Turing Award laureates, and a thousand other experts have published a joint open letter calling for a pause on training any artificial intelligence more powerful than GPT-4.

Original: Pause Giant AI Experiments: An Open Letter


Pause Giant AI Experiments: An Open Letter

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

Notes and References

[1]

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).

Bostrom, N. (2016). Superintelligence. Oxford University Press.

Bucknall, B. S., & Dori-Hacohen, S. (2022, July). Current and near-term AI as a potential existential risk factor. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 119-129).

Carlsmith, J. (2022). Is Power-Seeking AI an Existential Risk? arXiv preprint arXiv:2206.13353.

Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. Norton & Company.

Cohen, M., et al. (2022). Advanced Artificial Agents Intervene in the Provision of Reward. AI Magazine, 43(3) (pp. 282-293).

Eloundou, T., et al. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.

Hendrycks, D., & Mazeika, M. (2022). X-risk Analysis for AI Research. arXiv preprint arXiv:2206.05862.

Ngo, R. (2022). The alignment problem from a deep learning perspective. arXiv preprint arXiv:2209.00626.

Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.

Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.

Weidinger, L., et al. (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.

[2]

Ordonez, V., et al. (2023, March 16). OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: 'A little bit scared of this'. ABC News.

Perrigo, B. (2023, January 12). DeepMind CEO Demis Hassabis Urges Caution on AI. Time.

[3]

Bubeck, S., et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv:2303.12712.

OpenAI (2023). GPT-4 Technical Report. arXiv:2303.08774.

[4]

Ample legal precedent exists – for example, the widely adopted OECD AI Principles require that AI systems "function appropriately and do not pose unreasonable safety risk".

[5]

Examples include human cloning, human germline modification, gain-of-function research, and eugenics.



