Daily Translation #7
"Killer AI" is real. Here's how we stay safe, sane and strong in a brave new world
The rapid advancement of artificial intelligence (AI) has been nothing short of remarkable. From health care to finance, AI is transforming industries and has the potential to elevate human productivity to unprecedented levels. However, this exciting promise is accompanied by a looming concern among the public and some experts: the emergence of "Killer AI." In a world where innovation has already changed society in unexpected ways, how do we separate legitimate fears from those that should still be reserved for fiction?
To help answer questions like these, we recently released a policy brief for the Mercatus Center at George Mason University titled "On Defining 'Killer AI.'" In it, we offer a novel framework to assess AI systems for their potential to cause harm, an important step towards addressing the challenges posed by AI and ensuring its responsible integration into society.
AI has already shown its transformative power, offering solutions to some of society's most pressing problems. It enhances medical diagnoses, accelerates scientific research, and streamlines processes across the business world. By automating repetitive tasks, AI frees up human talent to focus on higher-level responsibilities and creativity.
The potential for good is boundless. While optimistic, it's not particularly unreasonable to imagine an AI-fueled economy where, after a period of adjustment, people are significantly healthier and more prosperous while working far less than we do today.
It is important, however, to ensure this potential is achieved safely. To our knowledge, our attempt to assess AI's real-world safety risks also marks the first attempt to comprehensively define the phenomenon of "Killer AI."
We define it as AI systems that directly cause physical harm or death, whether by design or due to unforeseen consequences. Importantly, the definition both encompasses and distinguishes between physical and virtual AI systems, recognizing that harm could potentially arise from various forms of AI.
Although their examples are complex to understand, science fiction can at least help illustrate the concept of physical and virtual AI systems leading to tangible physical harm. The Terminator character has long been used as an example of the risks of physical AI systems. However, potentially more dangerous are virtual AI systems, an extreme example of which can be found in the newest "Mission Impossible" movie. It is realistic to say that our world is becoming increasingly interconnected, and our critical infrastructure is not exempt.
Our proposed framework offers a systematic approach to assess AI systems, with a key focus on prioritizing the welfare of many over the interests of the few. By considering not just the possibility of harm but also its severity, we allow for a rigorous evaluation of AI systems' safety and risk factors. It has the potential to uncover previously unnoticed threats and enhance our ability to mitigate risks associated with AI.
Our framework enables this by requiring a deeper consideration and understanding of the potential for an AI system to be repurposed or misused, as well as the eventual repercussions of an AI system's use. Moreover, we stress the importance of interdisciplinary stakeholder assessment in approaching these considerations. This will permit a more balanced perspective on the development and deployment of these systems.
This evaluation can serve as a foundation for comprehensive legislation, appropriate regulation, and ethical discussions on Killer AI. Our focus on preserving human life and ensuring the welfare of many can help legislative efforts address and prioritize the most pressing concerns elicited by any potential Killer AIs.
The emphasis on the importance of multiple, interdisciplinary stakeholder involvement might encourage those of different backgrounds to become more involved in the ongoing discussion. Through this, it is our hope that future legislation can be more comprehensive and the surrounding discussion can be better informed.
While a potentially critical tool for policymakers, industry leaders, researchers, and other stakeholders to evaluate AI systems rigorously, the framework also underscores the urgency for further research, scrutiny, and proactivity in the field of AI safety. This will be challenging in such a fast-moving field. Fortunately, researchers will be motivated by the ample opportunities to learn from the technology.
AI should be a force for good—one that enhances human lives, not one that puts them in jeopardy. By developing effective policies and approaches to address the challenges of AI safety, society can harness the full potential of this emerging technology while safeguarding against potential harm. The framework presented here is a valuable tool in this mission. Whether or not fears about AI prove true, we'll be left better off if we can navigate this exciting frontier while avoiding its unintended consequences.
Key vocabulary:
nothing short of remarkable: truly remarkable; very impressive
looming: impending; drawing ominously near
policy brief: a concise report summarizing a policy issue and offering recommendations
address the challenges: to tackle or respond to the challenges
streamlines: makes a process simpler and more efficient
rigorous: strict; thorough
mitigate: to lessen or alleviate
Original Article:
'Killer AI' is real. Here's how we stay safe, sane and strong in a brave new world
The rapid advancement of artificial intelligence (AI) has been nothing short of remarkable. From health care to finance, AI is transforming industries and has the potential to elevate human productivity to unprecedented levels. However, this exciting promise is accompanied by a looming concern among the public and some experts: the emergence of "Killer AI." In a world where innovation has already changed society in unexpected ways, how do we separate legitimate fears from those that should still be reserved for fiction?
To help answer questions like these, we recently released a policy brief for the Mercatus Center at George Mason University titled "On Defining ‘Killer AI.'" In it, we offer a novel framework to assess AI systems for their potential to cause harm, an important step towards addressing the challenges posed by AI and ensuring its responsible integration into society.
AI has already shown its transformative power, offering solutions to some of society's most pressing problems. It enhances medical diagnoses, accelerates scientific research, and streamlines processes across the business world. By automating repetitive tasks, AI frees up human talent to focus on higher-level responsibilities and creativity.
The potential for good is boundless. While optimistic, it’s not particularly unreasonable to imagine an AI-fueled economy where, after a period of adjustment, people are significantly healthier and more prosperous while working far less than we do today.
It is important, however, to ensure this potential is achieved safely. To our knowledge, our attempt to assess AI’s real-world safety risks also marks the first attempt to comprehensively define the phenomenon of "Killer AI."
We define it as AI systems that directly cause physical harm or death, whether by design or due to unforeseen consequences. Importantly, the definition both encompasses and distinguishes between physical and virtual AI systems, recognizing that harm could potentially arise from various forms of AI.
Although their examples are complex to understand, science fiction can at least help illustrate the concept of physical and virtual AI systems leading to tangible physical harm. The Terminator character has long been used as an example of the risks of physical AI systems. However, potentially more dangerous are virtual AI systems, an extreme example of which can be found in the newest "Mission Impossible" movie. It is realistic to say that our world is becoming increasingly interconnected, and our critical infrastructure is not exempt.
Our proposed framework offers a systematic approach to assess AI systems, with a key focus on prioritizing the welfare of many over the interests of the few. By considering not just the possibility of harm but also its severity, we allow for a rigorous evaluation of AI systems’ safety and risk factors. It has the potential to uncover previously unnoticed threats and enhance our ability to mitigate risks associated with AI.
Our framework enables this by requiring a deeper consideration and understanding of the potential for an AI system to be repurposed or misused and the eventual repercussions of an AI system’s use. Moreover, we stress the importance of interdisciplinary stakeholder assessment in approaching these considerations. This will permit a more balanced perspective on the development and deployment of these systems.
This evaluation can serve as a foundation for comprehensive legislation, appropriate regulation, and ethical discussions on Killer AI. Our focus on preserving human life and ensuring the welfare of many can help legislative efforts address and prioritize the most pressing concerns elicited by any potential Killer AIs.
The emphasis on the importance of multiple, interdisciplinary stakeholder involvement might encourage those of different backgrounds to become more involved in the ongoing discussion. Through this, it is our hope that future legislation can be more comprehensive and the surrounding discussion can be better informed.
While a potentially critical tool for policymakers, industry leaders, researchers, and other stakeholders to evaluate AI systems rigorously, the framework also underscores the urgency for further research, scrutiny, and proactivity in the field of AI safety. This will be challenging in such a fast-moving field. Fortunately, researchers will be motivated by the ample opportunities to learn from the technology.
AI should be a force for good—one that enhances human lives, not one that puts them in jeopardy. By developing effective policies and approaches to address the challenges of AI safety, society can harness the full potential of this emerging technology while safeguarding against potential harm. The framework presented here is a valuable tool in this mission. Whether or not fears about AI prove true or unfounded, we’ll be left better off if we can navigate this exciting frontier while avoiding its unintended consequences.
Original URL:
https://www.foxnews.com/opinion/killer-ai-real-safe-sane-strong-brave-new-world