【TED演講稿】人工智能是否可以真正理解人類(lèi)?/英語(yǔ)演講文字稿
TED演講者:Alona Fyshe / 阿羅娜·費(fèi)舍
演講標(biāo)題:Does AI actually understand us? / 人工智能是否可以真正理解人類(lèi)?
內(nèi)容概要:Is AI as smart as it seems? Exploring the "brain" behind machine learning, neural networker Alona Fyshe delves into the language processing abilities of talkative tech (like the groundbreaking chatbot and internet obsession ChatGPT) and explains how different it is from your own brain -- even though it can sound convincingly human.
人工智能是否像它看起來(lái)那樣聰明?在探索機(jī)器學(xué)習(xí)背后的“大腦”時(shí),神經(jīng)網(wǎng)絡(luò)學(xué)者阿羅娜·費(fèi)舍(Alona Fyshe)深入研究了聊天技術(shù)(如開(kāi)創(chuàng)性的聊天機(jī)器人、互聯(lián)網(wǎng)風(fēng)潮中的ChatGPT)背后的語(yǔ)言處理能力,并解釋了它與你自己的大腦有多大不同——盡管看起來(lái),它非常能讓你相信它和人類(lèi)一樣。
******************************************************************
【1】People are funny.
人是非常有趣的。
【2】We're constantly trying to understand and interpret the world around us.
我們一直在試圖理解和解釋 我們周?chē)氖澜纭?/p>
【3】I live in a house with two black cats, and let me tell you, every time I see a black, bunched up sweater out of the corner of my eye,
我家里有兩只黑貓, 我可以告訴你, 每當(dāng)我的余光瞥見(jiàn) 一件打結(jié)的黑色毛衣,
【4】I think it's a cat.
我都會(huì)認(rèn)為那是一只貓。
【5】It's not just the things we see.
不只是我們看到的東西。
【6】Sometimes we attribute more intelligence than might actually be there.
我們有時(shí)以為一些東西有超常的智慧, 但實(shí)際上未必有。
【7】Maybe you've seen the dogs on TikTok.
比如,你也許在TikTok上 看到過(guò)狗狗的視頻。
【8】They have these little buttons that say things like "walk" or "treat."
上面有一些小按鈕, 寫(xiě)著“遛遛”或是“要吃的”。
【9】They can push them to communicate some things with their owners, and their owners think they use them to communicate some pretty impressive things.
這些狗狗能按下按鈕 和主人交流一些事情, 而主人們認(rèn)為,狗狗用這些按鈕 表達(dá)了一些非常了不起的內(nèi)容。
【10】But do the dogs know what they're saying?
但狗狗知道它們?cè)谡f(shuō)什么嗎?
【11】Or perhaps you've heard the story of Clever Hans the horse, and he could do math.
或許你聽(tīng)過(guò)《聰明的漢斯》的故事, 這匹馬居然能做數(shù)學(xué)題。
【12】And not just like, simple math problems, really complicated ones, like, if the eighth day of the month falls on a Tuesday, what's the date of the following Friday?
不僅僅是簡(jiǎn)單的數(shù)學(xué)計(jì)算, 而是非常復(fù)雜的問(wèn)題, 比如,如果一個(gè)月的第八天是星期二, 那么下一個(gè)星期五的日期是什么?
【13】It's like, pretty impressive for a horse.
對(duì)于一匹馬來(lái)說(shuō),這真是令人驚嘆。
【14】Unfortunately, Hans wasn't doing math, but what he was doing was equally impressive.
不幸的是,漢斯并不是在做數(shù)學(xué)題, 但它學(xué)會(huì)的事情也很了不起。
【15】Hans had learned to watch the people in the room to tell when he should tap his hoof.
漢斯學(xué)會(huì)了觀察房間里的人, 來(lái)判斷它該在什么時(shí)候敲蹄子。
【16】So he communicated his answers by tapping his hoof.
它通過(guò)敲蹄子來(lái)“說(shuō)出”它的答案。
【17】It turns out that if you know the answer to "if the eighth day of the month falls on a Tuesday, what's the date of the following Friday,"
事實(shí)證明,如果你知道 “如果一個(gè)月的第八天是星期二, 那么下一個(gè)星期五是幾號(hào)” 這個(gè)問(wèn)題的答案,
【18】you will subconsciously change your posture once the horse has given the correct 11 taps.
你會(huì)在漢斯正確地敲打 11 下的時(shí)候 下意識(shí)地改變你的姿勢(shì)。
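The date arithmetic Hans was credited with is easy to check in a few lines. A minimal sketch (the Mon=0 weekday indexing is my convention, not from the talk):

```python
# If the 8th of the month falls on a Tuesday, what is the date of the
# following Friday?  Weekdays indexed Mon=0 ... Sun=6.
TUESDAY, FRIDAY = 1, 4

def following_weekday(date, weekday_of_date, target_weekday):
    """Next date strictly after `date` that falls on `target_weekday`."""
    days_ahead = (target_weekday - weekday_of_date) % 7
    if days_ahead == 0:  # same weekday: jump a full week
        days_ahead = 7
    return date + days_ahead

print(following_weekday(8, TUESDAY, FRIDAY))  # → 11, so 11 hoof taps
```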
【19】So Hans couldn't do math, but he had learned to watch the people in the room who could do math, which, I mean, still pretty impressive for a horse.
所以漢斯不會(huì)做數(shù)學(xué), 但它學(xué)會(huì)了觀察房間里會(huì)做數(shù)學(xué)的人, 這對(duì)一匹馬來(lái)說(shuō)還是很了不起的。
【20】But this is an old picture, and we would not fall for Clever Hans today.
但這已經(jīng)是一張老照片了, 今天的我們已經(jīng)不會(huì) 再被聰明的漢斯騙到了。
【21】Or would we?
會(huì)嗎?
【22】Well, I work in AI, and let me tell you, things are wild.
我在人工智能領(lǐng)域工作, 我可以告訴你,事情很瘋狂。
【23】There have been multiple examples of people being completely convinced that AI understands them.
有許多例子表明,人們完全相信 人工智能能理解他們。
【24】In 2022, a Google engineer thought that Google's AI was sentient.
2022 年, 一位谷歌工程師認(rèn)為 谷歌的人工智能有自我意識(shí)。
【25】And you may have had a really human-like conversation with something like ChatGPT.
你也可能試過(guò)與ChatGPT 進(jìn)行類(lèi)似人類(lèi)的對(duì)話。
【26】But models we're training today are so much better than the models we had even five years ago.
我們今天訓(xùn)練的模型比我們 僅五年前的模型都要好得多。
【27】It really is remarkable.
這真的很了不起。
【28】So at this super crazy moment in time, let's ask the super crazy question: Does AI understand us, or are we having our own Clever Hans moment?
所以在這個(gè)瘋狂的時(shí)刻, 讓我們問(wèn)一個(gè)瘋狂的問(wèn)題: 人工智能是真的理解我們, 還是我們又遇到了一匹聰明的漢斯?
【29】Some philosophers think that computers will never understand language.
一些哲學(xué)家認(rèn)為, 計(jì)算機(jī)永遠(yuǎn)不會(huì)理解語(yǔ)言。
【30】To illustrate this, they developed something they call the Chinese room argument.
為了說(shuō)明這一點(diǎn),他們?cè)O(shè)計(jì)了一段 被稱(chēng)為“中文房間”的論證。
【31】In the Chinese room, there is a person, hypothetical person, who does not understand Chinese, but he has along with him a set of instructions that tell him how to respond in Chinese to any Chinese sentence.
在中文房間里, 有一個(gè)人,虛構(gòu)的人, 他/她不懂中文, 但他/她眼前有一套指令, 告訴他/她如何用中文 回應(yīng)任何中文語(yǔ)句。
【32】Here's how the Chinese room works.
中文房間中會(huì)進(jìn)行以下流程。
【33】A piece of paper comes in through a slot in the door, has something written in Chinese on it.
門(mén)上的縫隙里遞來(lái)了一張紙條, 上面寫(xiě)著一些中文。
【34】The person uses their instructions to figure out how to respond.
這個(gè)人要用面前的指示想出回應(yīng)的內(nèi)容。
【35】They write the response down on a piece of paper and then send it back out through the door.
他/她把回答寫(xiě)在一張紙上, 再通過(guò)門(mén)上的縫隙傳出去。
【36】To somebody who speaks Chinese, standing outside this room, it might seem like the person inside the room speaks Chinese.
對(duì)于說(shuō)中文的人來(lái)說(shuō), 站在這個(gè)房間外面, 可能會(huì)覺(jué)得房間里的人會(huì)說(shuō)中文。
【37】But we know they do not, because no knowledge of Chinese is required to follow the instructions.
但我們知道他/她不會(huì), 因?yàn)樽裱甘静恍枰獙W(xué)會(huì)中文。
【38】Performance on this task does not show that you know Chinese.
這項(xiàng)任務(wù)的表現(xiàn)并不能說(shuō)明你懂中文。
【39】So what does that tell us about AI?
這個(gè)論證和AI有什么關(guān)系?
【40】Well, when you and I stand outside of the room, when we speak to one of these AIs like ChatGPT, we are the person standing outside the room.
當(dāng)你我站在房間外面, 與像ChatGPT這樣的AI說(shuō)話時(shí), 我們就是站在房間外面的人。
【41】We're feeding in English sentences, we're getting English sentences back.
我們輸入的是英語(yǔ)句子, 得到的是英語(yǔ)句子的反饋。
【42】It really looks like the models understand us.
看起來(lái)這些模型真的能理解我們。
【43】It really looks like they know English.
看起來(lái)它們真的像懂英語(yǔ)。
【44】But under the hood, these models are just following a set of instructions, albeit complex.
但在系統(tǒng)的底層, 這些模型只是遵循一套指令, 盡管是很復(fù)雜的指令。
【45】How do we know if AI understands us?
我們?nèi)绾沃繟I能否理解我們?
【46】To answer that question, let's go back to the Chinese room again.
為了回答這個(gè)問(wèn)題, 讓我們?cè)倩氐健爸形姆块g”的例子。
【47】Let's say we have two Chinese rooms.
假設(shè)我們有兩個(gè)中文房間。
【48】In one Chinese room is somebody who actually speaks Chinese, and in the other room is our impostor.
其中一個(gè)房間里 是真正會(huì)說(shuō)中文的人, 而另一個(gè)房間里的是個(gè)冒牌貨。
【49】When the person who actually speaks Chinese gets a piece of paper that says something in Chinese in it, they can read it, no problem.
當(dāng)真正講中文的人拿到一張 寫(xiě)有中文的紙時(shí), 他/她當(dāng)然可以讀懂。
【50】But when our impostor gets it, he again has to use his set of instructions to figure out how to respond.
但是,當(dāng)這位冒牌貨拿到紙條時(shí), 他/她必須使用那套指令來(lái)作出回應(yīng)。
【51】From the outside, it might be impossible to distinguish these two rooms, but we know inside something really different is happening.
從外面看,可能無(wú)法區(qū)分這兩個(gè)房間, 但我們知道房間里面 發(fā)生著一些根本不同的事情。
【52】To illustrate that, let's say inside the minds of our two people, inside of our two rooms, is a little scratch pad.
為了說(shuō)明這一點(diǎn), 假設(shè)在兩個(gè)房間里的兩個(gè)人, 兩個(gè)人的頭腦中 各有一個(gè)小草稿本。
【53】And everything they have to remember in order to do this task has to be written on that little scratch pad.
完成這項(xiàng)任務(wù)需要的所有記憶 都必須記在小草稿本上。
【54】If we could see what was written on that scratch pad, we would be able to tell how different their approach to the task is.
如果我們能看到寫(xiě)在草稿本上的東西, 我們就能知道他們 完成任務(wù)的方法有什么不同。
【55】So though the input and the output of these two rooms might be exactly the same, the process of getting from input to output -- completely different.
因此,盡管這兩個(gè)房間的輸入和輸出 可能完全相同, 但從輸入轉(zhuǎn)化為輸出的過(guò)程完全不同。
【56】So again, what does that tell us about AI?
這跟人工智能又有什么關(guān)系呢?
【57】Again, even if AI generates completely plausible dialogue and answers questions just like we would expect, it may still be an impostor of sorts.
即使人工智能產(chǎn)生了完全合理的對(duì)話, 像我們期望的那樣回答問(wèn)題, 它也仍然可能是某種程度上的冒牌貨。
【58】If we want to know if AI understands language like we do, we need to know what it's doing.
如果我們想知道人工智能 能否像我們一樣理解語(yǔ)言, 我們需要知道它在做什么。
【59】We need to get inside to see what it's doing.
我們需要深入內(nèi)部,看看它在做什么。
【60】Is it an impostor or not?
它到底是不是一個(gè)冒牌貨?
【61】We need to see its scratch pad, and we need to be able to compare it to the scratch pad of somebody who actually understands language.
我們需要看到它的草稿本, 并將其與 真正理解語(yǔ)言的人類(lèi)的草稿本進(jìn)行比較。
【62】But like scratch pads in brains, that's not something we can actually see, right?
但是,大腦中的“草稿本” 不是我們能隨便看到的東西,對(duì)吧?
【63】Well, it turns out that we can kind of see scratch pads in brains.
實(shí)際上,我們可以在一定程度上 “看到”大腦中的草稿。
【64】Using something like fMRI or EEG, we can take what are like little snapshots of the brain while it's reading.
用fMRI或EEG這樣的技術(shù), 我們可以在人閱讀時(shí)拍下大腦的快照。
【65】So have people read words or stories and then take pictures of their brain.
在人們讀單詞或讀故事的時(shí)候, 拍攝他們大腦的狀態(tài)。
【66】And those brain images are like fuzzy, out-of-focus pictures of the scratch pad of the brain.
這些腦成像的圖片就像是 模糊、失焦的草稿本照片,
【67】They tell us a little bit about how the brain is processing and representing information while you read.
它們能告訴我們一些信息, 閱讀時(shí)的大腦是如何處理、表現(xiàn)信息的。
【68】So here are three brain images taken while a person read the word "apartment,"
這里有三張大腦圖像,對(duì)應(yīng)一個(gè)人 讀到三個(gè)詞時(shí)的情況:“公寓”、
【69】'"house" and "celery."
“房子”和“芹菜”。
【70】You can see just with your naked eye that the brain image for "apartment" and "house"
你一眼就能看出, “公寓”和“房子”的腦圖像
【71】are more similar to each other than they are to the brain image for "celery."
比“芹菜”的腦圖像 更為相似。
【72】And of course you know that apartments and houses are more similar to each other than either is to celery, just as words.
當(dāng)然,你也知道,單就詞義而言, 公寓和房子本來(lái)就彼此更相似, 而不是更像芹菜。
【73】So said another way, the brain uses its scratchpad when reading the words "apartment" and "house"
換一種說(shuō)法, 大腦在讀到“公寓”和“房子”兩個(gè)詞時(shí) 在草稿本上記錄下的內(nèi)容
【74】in a way that's more similar than when you read the word "celery."
比讀到“芹菜”時(shí)的草稿本更相似。
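That similarity claim can be made concrete: treat each brain image as a vector of voxel activations and compare the vectors. A minimal sketch with invented three-voxel vectors (real fMRI images have tens of thousands of voxels):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two activation vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented voxel activations, standing in for real brain images.
apartment = [0.9, 0.8, 0.1]
house     = [0.8, 0.9, 0.2]
celery    = [0.1, 0.2, 0.9]

print(cosine_similarity(apartment, house))   # high, about 0.99
print(cosine_similarity(apartment, celery))  # much lower, about 0.30
```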
【75】The scratch pad tells us a little bit about how the brain represents the language.
這些草稿本向我們透露了一些 大腦表示語(yǔ)言的方法。
【76】It's not a perfect picture of what the brain's doing, but it's good enough.
這種方法并不能完美展現(xiàn) 大腦中發(fā)生的情況, 但是足夠用了。
【77】OK, so we have scratch pads for the brain.
現(xiàn)在我們有了大腦的草稿紙,
【78】Now we need a scratch pad for AI.
我們需要拿到AI的草稿紙。
【79】So inside a lot of AIs is a neural network.
許多AI的內(nèi)部是一個(gè)神經(jīng)網(wǎng)絡(luò),
【80】And inside of a neural network is a bunch of these little neurons.
而神經(jīng)網(wǎng)絡(luò)是由 一個(gè)個(gè)小的神經(jīng)元組成的。
【81】So here the neurons are like these little gray circles.
神經(jīng)元就像這些灰色的小圓圈。
【82】And we would like to know what is the scratch pad of a neural network?
我們想要知道 神經(jīng)網(wǎng)絡(luò)的草稿紙長(zhǎng)什么樣?
【83】Well, when we feed in a word into a neural network, each of the little neurons computes a number.
當(dāng)我們給神經(jīng)網(wǎng)絡(luò)輸入一個(gè)詞時(shí), 每個(gè)小神經(jīng)元都會(huì)計(jì)算一個(gè)數(shù)字。
【84】Those little numbers I'm representing here with colors.
我用顏色來(lái)表示這些數(shù)字。
【85】So every neuron computes this little number, and those numbers tell us something about how the neural network is processing language.
每個(gè)神經(jīng)元會(huì)計(jì)算一個(gè)數(shù)字, 這些數(shù)字給我們展現(xiàn)了 神經(jīng)網(wǎng)絡(luò)是如何處理語(yǔ)言的。
【86】Taken together, all of those little circles paint us a picture of how the neural network is representing language, and they give us the scratch pad of the neural network.
總結(jié)起來(lái), 所有這些小圓圈為我們描述了 神經(jīng)網(wǎng)絡(luò)表示語(yǔ)言的方法, 展現(xiàn)了神經(jīng)網(wǎng)絡(luò)的草稿本。
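In code, a neural network's scratch pad is just the vector of numbers its hidden neurons compute for an input. A toy sketch, with an invented word embedding and invented weights (real models have billions of weights):

```python
import math

def neuron_activations(input_vec, weights, biases):
    """One hidden layer: each neuron computes tanh(w . x + b), one number per neuron."""
    return [
        math.tanh(sum(w * x for w, x in zip(ws, input_vec)) + b)
        for ws, b in zip(weights, biases)
    ]

# Invented 3-dimensional word embedding and an invented 4-neuron hidden layer.
word_embedding = [0.2, -0.5, 0.8]
weights = [[0.1, 0.4, -0.2], [0.7, -0.1, 0.3], [-0.5, 0.2, 0.6], [0.3, 0.3, 0.3]]
biases = [0.0, 0.1, -0.1, 0.2]

scratch_pad = neuron_activations(word_embedding, weights, biases)
print(scratch_pad)  # four numbers: the "colors" of four little circles
```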
【87】OK, great.
太好了。
【88】Now we have two scratch pads, one from the brain and one from AI.
現(xiàn)在我們有了兩張草稿紙, 一張來(lái)自大腦,一張來(lái)自人工智能。
【89】And we want to know: Is AI doing something like what the brain is doing?
我們想知道的是: AI做的事情是否和大腦類(lèi)似?
【90】How can we test that?
我們要怎么判斷呢?
【91】Here's what researchers have come up with.
這是研究人員想出的辦法。
【92】We're going to train a new model.
我們要訓(xùn)練一個(gè)新的模型。
【93】That new model is going to look at the neural network scratch pad for a particular word and try to predict the brain scratch pad for the same word.
新的模型將檢查神經(jīng)網(wǎng)絡(luò) 對(duì)某個(gè)單詞的“草稿”, 并試圖預(yù)測(cè)同一個(gè)單詞的大腦草稿。
【94】We can do it the other way around, too.
順便一提,這個(gè)過(guò)程也可以反過(guò)來(lái)。
【95】So let's train a new model.
我們來(lái)訓(xùn)練一個(gè)新的模型,
【96】It's going to look at the neural network scratch pad for a particular word and try to predict the brain scratchpad.
它會(huì)檢查神經(jīng)網(wǎng)絡(luò) 對(duì)特定單詞的草稿, 并預(yù)測(cè)大腦的草稿。
【97】If the brain and AI are doing nothing alike, have nothing in common, we won't be able to do this prediction task.
如果大腦和AI所做的事情 沒(méi)有任何相似之處, 沒(méi)有任何共同之處, 這項(xiàng)預(yù)測(cè)任務(wù)將無(wú)法完成。
【98】It won't be possible to predict one from the other.
兩者中的任何一個(gè)都無(wú)法預(yù)測(cè)另一個(gè)。
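In practice this prediction step is typically a simple linear regression from one scratch pad to the other. A pure-Python sketch on tiny invented data, fit by gradient descent rather than a library solver:

```python
# Learn a linear map from neural-network scratch pads (inputs) to brain
# scratch pads (targets) by plain gradient descent. All data is invented.
nn_pads    = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # one row per word
brain_pads = [[2.0, 1.0], [0.5, 3.0], [2.5, 4.0]]  # matching brain responses

n_in, n_out = 2, 2
W = [[0.0] * n_in for _ in range(n_out)]  # weight matrix, starts at zero

def predict(x):
    return [sum(W[o][i] * x[i] for i in range(n_in)) for o in range(n_out)]

lr = 0.1
for _ in range(500):
    for x, y in zip(nn_pads, brain_pads):
        y_hat = predict(x)
        for o in range(n_out):
            err = y_hat[o] - y[o]
            for i in range(n_in):
                W[o][i] -= lr * err * x[i]  # squared-error gradient step

print(predict([1.0, 0.0]))  # approximately [2.0, 1.0]
```

If the two scratch pads had nothing in common, no such map would fit; the talk's point is that one does, imperfectly.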
【99】So we've reached a fork in the road and you can probably tell I'm about to tell you one of two things.
現(xiàn)在我們到了一個(gè)岔路口, 答案只會(huì)是以下兩者之一:
【100】I'm going to tell you AI is amazing, or I'm going to tell you AI is an imposter.
要么AI是非常驚人的; 要么AI只是一個(gè)冒牌貨。
【101】Researchers like me love to remind you that AI is nothing like the brain.
像我這樣的研究人員 特別喜歡說(shuō), 人工智能與大腦完全不同。
【102】And that is true.
這是事實(shí)。
【103】But could it also be that AI and the brain share something in common?
但AI和大腦有沒(méi)有相似點(diǎn)呢?
【104】So we've done this scratch pad prediction task, and it turns out that 75 percent of the time, the predicted neural network scratch pad for a particular word is more similar to the true neural network scratch pad for that word than it is to the neural network scratch pad for some other randomly chosen word. 75 percent is much better than chance.
我們進(jìn)行了這項(xiàng)草稿預(yù)測(cè)任務(wù), 結(jié)果發(fā)現(xiàn),有 75% 的概率, 針對(duì)某一詞語(yǔ)預(yù)測(cè)出的神經(jīng)網(wǎng)絡(luò)草稿 更接近該詞語(yǔ)的真實(shí)神經(jīng)網(wǎng)絡(luò)草稿, 而不是其他隨機(jī)詞語(yǔ)的神經(jīng)網(wǎng)絡(luò)草稿。 75% 要遠(yuǎn)高于隨機(jī)水平。
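The 75 percent figure comes from an evaluation of roughly this shape: for each word, check whether its predicted scratch pad is closer to its own true scratch pad than to that of a randomly chosen other word. A sketch with invented vectors:

```python
import math
import random

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_accuracy(predicted, actual, trials=1000, seed=0):
    """Fraction of trials where a word's predicted scratch pad is closer to its
    own true scratch pad than to a randomly chosen other word's. Chance = 0.5."""
    rng = random.Random(seed)
    words = list(predicted)
    hits = 0
    for _ in range(trials):
        w = rng.choice(words)
        other = rng.choice([v for v in words if v != w])
        if euclidean(predicted[w], actual[w]) < euclidean(predicted[w], actual[other]):
            hits += 1
    return hits / trials

# Invented true and predicted scratch pads for three words.
actual = {"apartment": [1.0, 0.0], "house": [0.9, 0.1], "celery": [0.0, 1.0]}
predicted = {"apartment": [1.0, 0.05], "house": [0.7, 0.3], "celery": [0.2, 0.8]}

print(match_accuracy(predicted, actual))  # 1.0 on this toy data; the real experiments land near 0.75
```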
【105】What about for more complicated things, not just words, but sentences, even stories?
那么對(duì)于更復(fù)雜的事物, 不只是單詞, 還有句子,甚至故事呢?
【106】Again, this scratch pad prediction task works.
這個(gè)草稿預(yù)測(cè)任務(wù)得到了同樣的結(jié)果。
【107】We're able to predict the neural network scratch pad from the brain and vice versa.
我們可以從大腦圖像預(yù)測(cè)神經(jīng)網(wǎng)絡(luò), 反過(guò)來(lái)也可以。
【108】Amazing.
太有意思了。
【109】So does that mean that neural networks and AI understand language just like we do?
那么,這是否意味著 神經(jīng)網(wǎng)絡(luò)和人工智能可以 像我們?nèi)祟?lèi)一樣理解語(yǔ)言呢?
【110】Well, truthfully, no.
說(shuō)實(shí)話,并不是。
【111】Though these scratch pad prediction tasks show above-chance accuracy, the underlying correlations are still pretty weak.
盡管這些草稿預(yù)測(cè)任務(wù) 表現(xiàn)出高于隨機(jī)的準(zhǔn)確率, 兩者底層的相關(guān)性仍然非常弱。
【112】And though neural networks are inspired by the brain, they don't have the same kind of structure and complexity that we see in the brain.
盡管神經(jīng)網(wǎng)絡(luò)的靈感來(lái)自于大腦, 它們并不具備大腦呈現(xiàn)的 結(jié)構(gòu)和復(fù)雜性。
【113】Neural networks also don't exist in the world.
神經(jīng)網(wǎng)絡(luò)也不存在于真實(shí)世界中。
【114】A neural network has never opened a door or seen a sunset, heard a baby cry.
從來(lái)沒(méi)有一個(gè)神經(jīng)網(wǎng)絡(luò)打開(kāi)過(guò)門(mén), 看到過(guò)日落,聽(tīng)到過(guò)嬰兒的哭聲。
【115】Can a neural network that doesn't actually exist in the world, hasn't really experienced the world, really understand language about the world?
一個(gè)并不真實(shí)存在于世界上、 沒(méi)有真正體驗(yàn)過(guò)世界的神經(jīng)網(wǎng)絡(luò), 能真正理解描述世界的語(yǔ)言嗎?
【116】Still, these scratch pad prediction experiments have held up: multiple brain imaging experiments, multiple neural networks.
盡管如此,這些草稿預(yù)測(cè)實(shí)驗(yàn)仍然站得住腳: 多個(gè)大腦成像實(shí)驗(yàn), 多個(gè)神經(jīng)網(wǎng)絡(luò)模型。
【117】We've also found that as the neural networks get more accurate, they also start to use their scratch pad in a way that becomes more brain-like.
我們還發(fā)現(xiàn), 隨著神經(jīng)網(wǎng)絡(luò)變得更加準(zhǔn)確, 它們也以一種更像大腦的方式 使用著草稿紙。
【118】And it's not just language.
這不僅僅是語(yǔ)言方面。
【119】We've seen similar results in navigation and vision.
我們?cè)趯?dǎo)航任務(wù)和視覺(jué)任務(wù)上 也看到了相似的結(jié)果。
【120】So AI is not doing exactly what the brain is doing, but it's not completely random either.
人工智能所做的并不完全和大腦相同, 但也不是完全隨機(jī)。
【121】So from where I sit, if we want to know if AI really understands language like we do, we need to get inside of the Chinese room.
從我的角度看來(lái), 如果我們想真正知道 AI能否像我們這樣理解語(yǔ)言, 我們需要進(jìn)入那個(gè)“中文房間”,
【122】We need to know what the AI is doing, and we need to be able to compare that to what people are doing when they understand language.
需要知道AI到底在做什么, 需要能將AI的行為與人類(lèi) 理解語(yǔ)言的行為比較。
【123】AI is moving so fast.
人工智能發(fā)展得太快了。
【124】Today I'm asking you, "Does AI understand language?" That might seem like a silly question in ten years.
今天我還在問(wèn)大家, 人工智能能否理解語(yǔ)言, 可能十年以后, 這個(gè)問(wèn)題就會(huì)看起來(lái)很“傻”。
【125】Or ten months.
也可能十個(gè)月。
【126】But one thing will remain true.
但有一件事不會(huì)變化。
【127】We are meaning-making humans, and we are going to continue to look for meaning and interpret the world around us.
我們是創(chuàng)造意義的人類(lèi)。 我們將繼續(xù)尋找意義, 解釋我們周?chē)氖澜纭?/p>
【128】And we will need to remember that if we only look at the input and output of AI, it's very easy to be fooled.
我們需要記住, 如果我們只看AI的輸入和輸出, 我們就容易被騙到。
【129】We need to get inside of the metaphorical room of AI in order to see what's happening.
我們需要真正深入 人工智能里的那個(gè)“房間”, 看到真正在發(fā)生的事情。
【130】It's what's inside that counts.
房間里發(fā)生了什么才是最重要的。
【131】Thank you.
謝謝。