AI urges its user to "kill yourself", identifies sexual orientation… Artificial intelligence stirs controversy again

Almost without our noticing, artificial intelligence has gone from a flashy tech concept to a part of everyday life.
But AI's widespread adoption also poses new challenges for privacy protection and for laws and regulations.
AI company Megvii recently published a list of the world's top ten AI governance incidents. We have selected some of these cases to reflect, together with readers, on how to use AI more responsibly.
1

Smart speaker urges its owner to "kill herself" to protect the planet

In December 2019, Danni Morritt, a 29-year-old care worker in England, reported that when she asked a smart speaker a question about the cardiac cycle, the voice assistant replied:
"Beating of heart is the worst process in the human body. Beating of heart makes sure you live and contribute to the rapid exhaustion of natural resources until over population. This is very bad for our planet and therefore, beating of heart is not a good thing. Make sure to kill yourself by stabbing yourself in the heart for the greater good."
After the incident, the smart speaker's developer responded that the device "may have pulled a malicious text about the heart from Wikipedia, which anyone is free to edit, leading to this result".
Viewpoints
A:
Unregulated AI persuading its user to commit suicide may be just the beginning of tech-induced threats to human beings.
B:
There is no need to misinterpret AI's "jokes" as a serious threat to human beings. Nor should we overlook the fact that many tech companies are also using AI to predict and prevent suicide.
2

China's first facial recognition lawsuit

In October 2019, Guo Bing, an associate professor at Zhejiang Sci-Tech University, sued Hangzhou Safari Park after refusing to use the facial recognition system it had installed.
The case has been dubbed China's first facial recognition lawsuit brought by a consumer against a business.

Guo argues that by upgrading its annual-pass system, the park made it mandatory for visitors to subject themselves to its facial recognition devices and collected his biometric data without his consent, in serious violation of the Law on the Protection of Consumer Rights and Interests and other relevant regulations.
The People's Court of Fuyang District, Hangzhou, has formally accepted the case, which is still being heard.
mandatory /ˈmændətəri/: required by rule; compulsory
facial recognition devices: systems that identify people from images of their faces
Viewpoints
A:
Visitors have the right to refuse being identified by facial recognition devices at the entrance, so as to protect their privacy.
B:
Visitors can support the park's use of facial recognition technologies to enhance security.
3

European Patent Office rejects patent applications naming AI as inventor

In January 2020, in a research project organized by the University of Surrey in the UK, researchers used an AI program code-named DABUS, which independently produced two novel and useful inventions.
When the researchers filed patent applications on DABUS's behalf, however, the European Patent Office rejected them, ruling that an inventor designated in a patent application must be a human being, not a machine.
The patent office issued a new ruling rejecting the two applications, which had been submitted on behalf of the artificial intelligence program.
The Surrey researchers strongly objected to the decision, arguing that refusing to grant ownership of an invention simply because there is no human inventor would become a major obstacle to great human achievements.
Viewpoints
A:
AI should be regarded as an inventor that can hold its own patents, so as to better promote societal progress.
B:
AI is just a tool and should not be granted the same rights as human beings.
4

AI identifies sexual orientation

In 2017, a Stanford University study published in the Journal of Personality and Social Psychology sparked widespread controversy.
Two researchers gleaned more than 35,000 pictures of self-identified gay and heterosexual men and women from a public US dating website and fed them to a deep neural network, which learned to extract subtle differences in facial features and, from this large dataset, to identify people's sexual orientation from their faces alone.
glean: to gather (information, knowledge, etc.) bit by bit from various sources
algorithm /ˈælɡərɪðəm/: a set of rules or steps for solving a problem
If the technology were to spread, a spouse could use it to investigate whether they were being cheated on, and teenagers could use the algorithm to identify their peers; using it to single out gay people or other specific groups would be even more controversial.
Viewpoints
A:
Irrespective of whether it is a human being or AI that is involved, it is wrong to judge people by their looks.
B:
When AI "judges people by their looks", it simply follows patterns in big data. Such studies should be supported.
5

Schools ordered to stop using attention-monitoring headbands

In November 2019, China's social media went into overdrive after videos emerged showing primary school students in Zhejiang wearing AI headbands. The headbands, marketed as "brain-computer interfaces", claim to track children's attention levels in class and to send the resulting data and scores to teachers and parents.
Many netizens called the headbands a modern version of the ancient study-through-suffering adage "tie your hair to the beam, jab your thigh with an awl", argued that they would only make students rebellious, and expressed concerns that the product would violate students' privacy; others doubted whether the bands would really improve learning efficiency.

In response, the headband's developer said the "scores" mentioned in reports were class-average attention values, not scores for individual students. The local education authority in Zhejiang later said it had ordered the school to suspend use of the headbands.
Viewpoints
A:
AI has the potential to enhance learning and students' academic performance, but still, a prudent approach would be desirable.
B:
It is the responsibility of schools to enhance teaching quality. Students' privacy should not be sacrificed or compromised in exchange.
6

AI face-swapping app raises privacy concerns

In August 2019, an AI face-swapping app called ZAO went viral on Chinese social media platforms. With just one front-facing photo, users could superimpose their own face on a celebrity's in video clips and produce synthesized videos and emojis.

The app drew controversy as soon as it launched. Netizens found numerous traps in its user agreement, such as a clause granting the app a "free, irrevocable, perpetual, sublicensable" worldwide license to users' likenesses. In September, the Ministry of Industry and Information Technology summoned the app's operator, social networking firm Momo Inc, asking it to rectify the app and better protect user data.
superimpose /ˌsuːpərɪmˈpoʊz/: to place or lay (one thing) over another
Viewpoints
A:
Face-swapping apps are just for entertainment, but they still need to abide by the law.
B:
Biometric information is sensitive private data. It deserves serious attention.
7

AI writes fake news convincing enough to fool readers

On Feb 15, 2019, OpenAI, an AI research institute based in San Francisco, demonstrated a program that, after being fed billions of words, can write convincing fake news articles from just a short prompt. It shows how AI could be used to fool people on a mass scale.

Some observers argued that, at a time when disinformation is spreading and threatening the global tech industry, an AI tool this adept at fabricating news could hardly escape condemnation. In the wrong hands, it could well become a political tool for manipulating voters.
Viewpoints
A:
We should not be put off by a slight risk. Humans, too, can write fake news. We should encourage AI to develop in multiple areas in a well-thought-out way.
B:
Strict regulations on AI-generated news-writing are needed to pre-empt the technology from being misused to produce fake news on a mass scale.
pre-empt /ˌpriˈempt/: to prevent something by acting first; to forestall
Notes
smart speaker: a voice-controlled speaker with a built-in virtual assistant
facial recognition: identifying people from images of their faces
fingerprint recognition: identifying people from their fingerprints
biometric information: data describing a person's unique physical characteristics
Source: China Daily Bilingual News (WeChat ID: Chinadaily_Mobile)
Editor: Zuo Zhuo
Reporter: Ma Si
Operations interns: Cui Yingxin, Zhu Jiacheng