外刊閱讀:人工智能造成的人身傷害,誰來負(fù)責(zé)?
Source: March 2023, Scientific American
?
Who is liable when AI kills?
人工智能造成的人身傷害,誰來負(fù)責(zé)?
“We need to protect people from faulty AI without curbing innovation.”
“我們需要在不抑制創(chuàng)新的情況下保護(hù)人們免受有缺陷的人工智能的影響?!?
?
Authors: George Maliha is a third-year internal medicine resident at the University of Pennsylvania Health System. Ravi B. Parikh is an oncologist and policy researcher at the University of Pennsylvania who develops ways to integrate AI into clinical care.
George Maliha是賓夕法尼亞大學(xué)衛(wèi)生系統(tǒng)(University of Pennsylvania Health System)的內(nèi)科三年級(jí)住院醫(yī)生。Ravi B. Parikh是賓夕法尼亞大學(xué)(University of Pennsylvania)的腫瘤學(xué)家和政策研究員,致力于開發(fā)將AI整合到臨床治療中的方法。
?
正文:
Who is responsible when artificial intelligence harms someone? A California jury may soon have to decide. In December 2019 a person driving a Tesla with an AI navigation system killed two people in an accident. The driver faces up to 12 years in prison. Several federal agencies are investigating Tesla crashes, and the U.S. Department of Justice has opened a criminal probe into how Tesla markets its self-driving system. And California’s Department of Motor Vehicles is examining its use of AI-guided driving features.
當(dāng)人工智能傷害某人時(shí),誰該負(fù)責(zé)?加州陪審團(tuán)可能很快就要做出決定。2019年12月,一名駕駛裝有人工智能導(dǎo)航系統(tǒng)的特斯拉的人在一起事故中導(dǎo)致兩人喪生。這名司機(jī)面臨最高12年的監(jiān)禁。幾個(gè)聯(lián)邦機(jī)構(gòu)正在調(diào)查特斯拉的車禍,美國司法部已經(jīng)對(duì)特斯拉如何營銷其自動(dòng)駕駛系統(tǒng)展開了刑事調(diào)查。加州機(jī)動(dòng)車輛管理局也在審查其對(duì)人工智能引導(dǎo)駕駛功能的使用。
?
AI navigation system 人工智能導(dǎo)航系統(tǒng)
U.S. Department of Justice美國司法部
opened a criminal probe into 展開了刑事調(diào)查
注意這里probe是一個(gè)名詞,意思是調(diào)查。
?
Our current liability system—used to determine responsibility and payment for injuries—is unprepared for AI. Liability rules were designed for a time when humans caused most injuries. But with AI, errors may occur without any direct human input. The liability system needs to adjust accordingly. Bad liability policy won’t just stifle AI innovation. It will also harm patients and consumers.
我們目前的責(zé)任體系——用來確定傷害的責(zé)任歸屬和賠償——還沒有為人工智能做好準(zhǔn)備。責(zé)任規(guī)則是為人類造成大多數(shù)傷害的時(shí)代設(shè)計(jì)的。但對(duì)于人工智能,即使沒有任何直接的人為參與,錯(cuò)誤也可能發(fā)生。責(zé)任制度需要相應(yīng)調(diào)整。糟糕的責(zé)任政策不僅會(huì)扼殺人工智能創(chuàng)新,還會(huì)傷害病人和消費(fèi)者。
?
The time to think about liability is now—as AI becomes ubiquitous but remains underregulated. AI-based systems have already contributed to injuries. In 2019 an AI algorithm misidentified a suspect in an aggravated assault, leading to a mistaken arrest. In 2020, during the height of the COVID pandemic, an AI-based mental health chatbot encouraged a simulated suicidal patient to take her own life.
現(xiàn)在正是考慮責(zé)任問題的時(shí)候——人工智能正變得無處不在,但仍然監(jiān)管不足?;谌斯ぶ悄艿南到y(tǒng)已經(jīng)造成了傷害。2019年,一個(gè)人工智能算法在一起惡意傷害案件中錯(cuò)誤識(shí)別了一名嫌疑人,導(dǎo)致錯(cuò)誤逮捕。2020年,在新冠肺炎大流行最嚴(yán)重的時(shí)期,一個(gè)基于人工智能的心理健康聊天機(jī)器人鼓勵(lì)一名模擬的有自殺傾向的患者結(jié)束自己的生命。
?
ubiquitous: being everywhere, very common 無處不在
algorithm(尤指計(jì)算機(jī))算法
aggravated assault惡意傷害
chatbot聊天機(jī)器人
?
Getting the liability landscape right is essential to unlocking AI’s potential. Uncertain rules and the prospect of costly litigation will discourage the investment, development and adoption of AI in industries ranging from health care to autonomous vehicles.
正確把握責(zé)任格局對(duì)于釋放人工智能的潛力至關(guān)重要。不確定的規(guī)則和代價(jià)高昂的訴訟前景,將阻礙從醫(yī)療保健到自動(dòng)駕駛汽車等行業(yè)對(duì)人工智能的投資、開發(fā)和采用。
?
litigation訴訟,起訴
?
Currently liability inquiries usually start—and stop—with the person who uses the algorithm. Granted, if someone misuses an AI system or ignores its warnings, that person should be liable. But AI errors are often not the fault of the user. Who can fault an emergency room physician for an AI algorithm that misses papilledema—swelling of a part of the retina? An AI’s failure to detect the condition could delay care and possibly cause a patient to lose their sight. Yet papilledema is challenging to diagnose without an ophthalmologist’s examination.
目前,責(zé)任調(diào)查通常從使用算法的人開始,也到使用算法的人為止。當(dāng)然,如果有人誤用了人工智能系統(tǒng)或忽視了它的警告,這個(gè)人應(yīng)該承擔(dān)責(zé)任。但人工智能的錯(cuò)誤往往不是用戶的錯(cuò)。如果人工智能算法漏診了視神經(jīng)乳頭水腫(視網(wǎng)膜的一部分發(fā)生腫脹),誰能因此指責(zé)急診室醫(yī)生呢?人工智能未能檢測(cè)到這種病癥,可能會(huì)延誤治療,并可能導(dǎo)致患者失明。然而,如果沒有眼科醫(yī)師的檢查,視神經(jīng)乳頭水腫很難診斷。
?
AI is constantly self-learning, meaning it takes information and looks for patterns in it. It is a “black box,” which makes it challenging to know what variables contribute to its output. This further complicates the liability question. How much can you blame a physician for an error caused by an unexplainable AI? Shifting the blame solely to AI engineers does not solve the issue. Of course, the engineers created the algorithm in question. But could every Tesla Autopilot accident be prevented by more testing before product launch?
人工智能在不斷地自我學(xué)習(xí),這意味著它獲取信息并在其中尋找模式。它是一個(gè)“黑箱”,因此很難知道哪些變量影響了它的輸出。這使責(zé)任問題進(jìn)一步復(fù)雜化。對(duì)于無法解釋的人工智能所造成的錯(cuò)誤,你又能在多大程度上責(zé)怪醫(yī)生?把責(zé)任完全推給人工智能工程師也不能解決問題。當(dāng)然,這些算法確實(shí)是工程師們創(chuàng)造的。但是否每一次特斯拉自動(dòng)駕駛事故都能通過在產(chǎn)品發(fā)布前進(jìn)行更多的測(cè)試來預(yù)防呢?
?
The key is to ensure that all stakeholders—users, developers and everyone else along the chain—bear enough liability to ensure AI safety and effectiveness, though not so much that they give up on AI. To protect people from faulty AI while still promoting innovation, we propose three ways to revamp traditional liability frameworks.
關(guān)鍵是要確保所有利益相關(guān)者——用戶、開發(fā)者以及鏈條上的其他所有人——都承擔(dān)足夠的責(zé)任來確保人工智能的安全性和有效性,但又不至于多到讓他們放棄人工智能。為了在促進(jìn)創(chuàng)新的同時(shí)保護(hù)人們免受有缺陷的人工智能的傷害,我們提出了三種改造傳統(tǒng)責(zé)任框架的方法。
?
First, insurers must protect policyholders from the costs of being sued over an AI injury by testing and validating new AI algorithms prior to use. Car insurers have similarly been comparing and testing automobiles for years. An independent safety system can provide AI stakeholders with a predictable liability system that adjusts to new technologies and methods.
首先,保險(xiǎn)公司必須通過在使用前測(cè)試和驗(yàn)證新的人工智能算法,來保護(hù)投保人免于承擔(dān)因人工智能傷害而被起訴的費(fèi)用。多年來,汽車保險(xiǎn)公司也一直在以類似的方式對(duì)汽車進(jìn)行比較和測(cè)試。一個(gè)獨(dú)立的安全體系可以為人工智能的利益相關(guān)者提供一個(gè)可預(yù)測(cè)的、能夠適應(yīng)新技術(shù)和新方法的責(zé)任體系。
?
Second, some AI errors should be litigated in courts with expertise in these cases. These tribunals could specialize in particular technologies or issues, such as dealing with the interaction of two AI systems (say, two autonomous vehicles that crash into each other). Such courts are not new: in the U.S., these courts have adjudicated vaccine injury claims for decades.
第二,一些人工智能錯(cuò)誤應(yīng)當(dāng)由在這類案件上具備專業(yè)知識(shí)的法庭來審理。這些法庭可以專門處理特定的技術(shù)或問題,例如處理兩個(gè)人工智能系統(tǒng)之間的交互(比如兩輛相撞的自動(dòng)駕駛汽車)。這樣的法院并不新鮮:在美國,這類法院裁決疫苗傷害索賠已有幾十年的歷史。
?
Third, regulatory standards from federal authorities such as the U.S. Food and Drug Administration or the National Highway Traffic Safety Administration could offset excess liability for developers and users. For example, federal regulations and legislation have replaced certain forms of liability for medical devices. Regulators ought to proactively focus on standard processes for AI development. In doing so, they could deem some AIs too risky to introduce to the market without testing, retesting or validation. This would allow agencies to remain nimble and prevent AI-related injuries, without AI developers incurring excess liability.
第三,來自美國食品和藥物管理局或國家公路交通安全管理局等聯(lián)邦機(jī)構(gòu)的監(jiān)管標(biāo)準(zhǔn),可以減輕開發(fā)者和用戶的過度責(zé)任。例如,聯(lián)邦法規(guī)和立法已經(jīng)取代了醫(yī)療器械的某些形式的責(zé)任。監(jiān)管機(jī)構(gòu)應(yīng)該主動(dòng)關(guān)注人工智能開發(fā)的標(biāo)準(zhǔn)流程。這樣一來,他們可以認(rèn)定某些人工智能風(fēng)險(xiǎn)過高,在未經(jīng)測(cè)試、重新測(cè)試或驗(yàn)證之前不得投放市場(chǎng)。這將使監(jiān)管機(jī)構(gòu)保持靈活,防止與人工智能相關(guān)的傷害,同時(shí)又不會(huì)讓人工智能開發(fā)者承擔(dān)過多的責(zé)任。