【TED演講稿】ChatGPT驚人潛力的內(nèi)幕【英語演講文字稿】
TED演講者:Greg Brockman / 格雷格·布羅克曼
演講標(biāo)題:The inside story of ChatGPT's astonishing potential / ChatGPT驚人潛力的內(nèi)幕
內(nèi)容概要:In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world. After the talk, head of TED Chris Anderson joins Brockman to dig into the timeline of ChatGPT's development and get Brockman's take on the risks, raised by many in the tech industry and beyond, of releasing such a powerful tool into the world.
在一場(chǎng)來自技術(shù)前沿的演講中,OpenAI聯(lián)合創(chuàng)始人Greg Brockman探討了ChatGPT的底層設(shè)計(jì)原則,并演示了一些令人震撼的、尚未發(fā)布的聊天機(jī)器人插件——正是這款聊天機(jī)器人在世界各地掀起了軒然大波。演講結(jié)束后,TED負(fù)責(zé)人Chris Anderson與Brockman一起回顧了ChatGPT的開發(fā)時(shí)間線,并請(qǐng)Brockman談?wù)効萍夹袠I(yè)內(nèi)外許多人提出的問題:向世界發(fā)布如此強(qiáng)大的工具有何風(fēng)險(xiǎn)。
*************************************************************************
【1】We started OpenAI seven years ago because we felt like something really interesting was happening in AI and we wanted to help steer it in a positive direction.
我們七年前創(chuàng)立了OpenAI,因?yàn)槲覀冇X得人工智能領(lǐng)域正在發(fā)生非常有趣的事情,我們想幫助引導(dǎo)它朝著積極的方向發(fā)展。
【2】It's honestly just really amazing to see how far this whole field has come since then.
說實(shí)話,看到這整個(gè)領(lǐng)域從那時(shí)起走了多遠(yuǎn),真的很神奇。
【3】And it's really gratifying to hear from people like Raymond who are using the technology we are building, and others, for so many wonderful things.
而聽到像Raymond這樣的人正在用我們和其他人構(gòu)建的技術(shù)去做這么多美妙的事情,真的很令人欣慰。
【4】We hear from people who are excited, we hear from people who are concerned, we hear from people who feel both those emotions at once.
我們聽到有人很興奮,也聽到有人很擔(dān)憂,還聽到有人同時(shí)懷有這兩種情緒。
【5】And honestly, that's how we feel.
老實(shí)說,這就是我們的感受。
【6】Above all, it feels like we're entering an historic period right now where we as a world are going to define a technology that will be so important for our society going forward.
最重要的是,感覺我們現(xiàn)在正進(jìn)入一個(gè)歷史性時(shí)期:我們整個(gè)世界將共同定義一項(xiàng)對(duì)我們社會(huì)未來至關(guān)重要的技術(shù)。
【7】And I believe that we can manage this for good.
我相信我們能夠把它管理好、使之向善。
【8】So today, I want to show you the current state of that technology and some of the underlying design principles that we hold dear.
所以今天,我想向大家展示這項(xiàng)技術(shù)的現(xiàn)狀,以及我們所珍視的一些底層設(shè)計(jì)原則。
【9】So the first thing I'm going to show you is what it's like to build a tool for an AI rather than building it for a human.
我要給大家看的第一件事,是為人工智能(而不是為人類)構(gòu)建工具是什么樣子。
【10】So we have a new DALL-E model, which generates images, and we are exposing it as an app for ChatGPT to use on your behalf.
我們有一個(gè)新的DALL-E模型,它可以生成圖像,我們把它作為一個(gè)應(yīng)用開放給ChatGPT,讓它代表你來使用。
【11】And you can do things like ask, you know, suggest a nice post-TED meal and draw a picture of it.
你可以做一些事情,比如問,你知道, 建議在TED演講后吃一頓大餐 然后畫一幅畫。
【12】(Laughter) Now you get all of the, sort of, ideation and creative back-and-forth and taking care of the details for you that you get out of ChatGPT.
(眾笑)現(xiàn)在,ChatGPT所能給你的那種構(gòu)思和創(chuàng)意上的往復(fù)交流、替你照料細(xì)節(jié)的能力,你全都得到了。
【13】And here we go, it's not just the idea for the meal, but a very, very detailed spread.
好了,它給出的不只是這頓飯的創(chuàng)意,而是一桌非常非常詳盡的菜肴。
【14】So let's see what we're going to get.
所以讓我們看看我們會(huì)得到什么。
【15】But ChatGPT doesn't just generate images in this case -- sorry, it doesn't just generate text, it also generates an image.
但在這個(gè)例子里,ChatGPT不只是生成圖像——抱歉,是不只是生成文本,它還生成了一張圖像。
【16】And that is something that really expands the power of what it can do on your behalf in terms of carrying out your intent.
而這一點(diǎn)真正擴(kuò)展了它代表你實(shí)現(xiàn)意圖的能力。
【17】And I'll point out, this is all a live demo.
我要指出, 這都是現(xiàn)場(chǎng)演示。
【18】This is all generated by the AI as we speak.
這一切都是人工智能在我們說話的此刻生成的。
【19】So I actually don't even know what we're going to see.
所以我其實(shí)都不知道 我們將要看到的。
【20】This looks wonderful.
這看起來棒極了。
【21】(Applause) I'm getting hungry just looking at it.
(掌聲) 我一看就餓了。
【22】Now we've extended ChatGPT with other tools too, for example, memory.
現(xiàn)在,我們還用其他工具擴(kuò)展了ChatGPT,例如"記憶"。
【23】You can say "save this for later."
你可以說“留著待會(huì)兒用?!?/p>
【24】And the interesting thing about these tools is they're very inspectable.
有趣的是 關(guān)于這些工具 它們是非??蓹z查的。
【25】So you get this little pop up here that says "use the DALL-E app."
所以你在這里看到了這個(gè)小彈窗 上面寫著“使用DALL-E應(yīng)用程序?!?/p>
【26】And by the way, this is coming to you, all ChatGPT users, over upcoming months.
順便說一句,所有ChatGPT用戶在未來幾個(gè)月內(nèi)都將用上這個(gè)功能。
【27】And you can look under the hood and see that what it actually did was write a prompt just like a human could.
你可以看看引擎蓋下面,看到它實(shí)際做的,就是像人類一樣寫出了一段提示詞。
【28】And so you sort of have this ability to inspect how the machine is using these tools, which allows us to provide feedback to them.
所以你在某種程度上有能力檢查機(jī)器是如何使用這些工具的,這讓我們可以向它們提供反饋。
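To make that concrete, here is a minimal Python sketch of what an inspectable tool-dispatch loop could look like. The tool names, arguments, and record format are hypothetical stand-ins for illustration; they are not OpenAI's actual plug-in API.

```python
# A minimal, hypothetical sketch of an inspectable tool-dispatch loop.
# Tool names and the record format are illustrative, not OpenAI's real API.
import json

MEMORY = {}  # a simple key-value store standing in for the memory tool

def run_dalle(prompt):
    return f"<image generated from: {prompt!r}>"

def dispatch(tool_call):
    """Execute one tool call and print a human-readable record of it."""
    name, args = tool_call["tool"], tool_call["args"]
    if name == "dalle":
        result = run_dalle(args["prompt"])
    elif name == "memory.save":
        MEMORY[args["key"]] = args["value"]
        result = "saved"
    else:
        raise ValueError(f"unknown tool: {name}")
    # This record is the equivalent of the "use the DALL-E app" pop-up:
    print(json.dumps({"call": tool_call, "result": result}, ensure_ascii=False))
    return result

# The model, not the user, decides which tool fits the request:
dispatch({"tool": "dalle", "args": {"prompt": "a nice post-TED meal"}})
dispatch({"tool": "memory.save", "args": {"key": "meal", "value": "post-TED spread"}})
```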
【29】Now it's saved for later, and let me show you what it's like to use that information and to integrate with other applications too.
現(xiàn)在它已保存?zhèn)溆?讓我給你看看使用這些信息、并與其他應(yīng)用集成是什么樣子。
【30】You can say, “Now make a shopping list for the tasty thing I was suggesting earlier.” And make it a little tricky for the AI.
你可以說:"現(xiàn)在為我之前建議的那道美味列一份購物清單。"再給人工智能加點(diǎn)難度。
【31】'"And tweet it out for all the TED viewers out there."
“并在推特上發(fā)布 TED觀眾。
【32】(Laughter) So if you do make this wonderful, wonderful meal, I definitely want to know how it tastes.
(眾笑)所以,如果你真的做了這頓美妙的大餐,我一定想知道它的味道如何。
【33】But you can see that ChatGPT is selecting all these different tools without me having to tell it explicitly which ones to use in any situation.
但你可以看到,ChatGPT正在自行選擇所有這些不同的工具,而不需要我明確告訴它在什么情況下用哪一個(gè)。
【34】And this, I think, shows a new way of thinking about the user interface.
我認(rèn)為,這顯示了一種新的方式 思考用戶界面。
【35】Like, we are so used to thinking of, well, we have these apps, we click between them, we copy/paste between them, and usually it's a great experience within an app as long as you kind of know the menus and know all the options.
就像,我們已經(jīng)習(xí)慣了思考, 好吧,我們有這些應(yīng)用程序, 我們?cè)谒鼈冎g點(diǎn)擊, 我們?cè)谒鼈冎g復(fù)制/粘貼, 通常這是一個(gè)很棒的 應(yīng)用程序內(nèi)的體驗(yàn) 只要你知道 菜單,并了解所有選項(xiàng)。
【36】Yes, I would like you to.
是的,我希望你能。
【37】Yes, please.
是的,請(qǐng)。
【38】Always good to be polite.
禮貌總是好的。
【39】(Laughter) And by having this unified language interface on top of tools, the AI is able to sort of take away all those details from you.
(眾笑)而通過在工具之上加一層統(tǒng)一的語言界面,人工智能就能替你省去所有這些細(xì)節(jié)。
【40】So you don't have to be the one who spells out every single sort of little piece of what's supposed to happen.
所以你不必事無巨細(xì)地把應(yīng)該發(fā)生的每一個(gè)小步驟都講清楚。
【41】And as I said, this is a live demo, so sometimes the unexpected will happen to us.
正如我所說,這是一個(gè)現(xiàn)場(chǎng)演示, 所以有時(shí)意想不到 會(huì)發(fā)生在我們身上。
【42】But let's take a look at the Instacart shopping list while we're at it.
但讓我們看看Instacart 購物清單。
【43】And you can see we sent a list of ingredients to Instacart.
你可以看到,我們把一份食材清單發(fā)給了Instacart。
【44】Here's everything you need.
這是你需要的一切。
【45】And the thing that's really interesting is that the traditional UI is still very valuable, right?
真正有趣的是,傳統(tǒng)的用戶界面仍然非常有價(jià)值,對(duì)吧?
【46】If you look at this, you still can click through it and sort of modify the actual quantities.
如果你看看這個(gè), 你仍然可以點(diǎn)擊它 并在某種程度上修改實(shí)際數(shù)量。
【47】And that's something that I think shows that they're not going away, traditional UIs.
我認(rèn)為這說明傳統(tǒng)UI不會(huì)消失。
【48】It's just we have a new, augmented way to build them.
只是我們有了一種新的、增強(qiáng)的方式來構(gòu)建它們。
【49】And now we have a tweet that's been drafted for our review, which is also a very important thing.
現(xiàn)在我們有了一條已起草好、等待我們審核的推文,這也是非常重要的一點(diǎn)。
【50】We can click “run,” and there we are, we’re the manager, we’re able to inspect, we're able to change the work of the AI if we want to.
我們可以點(diǎn)擊"運(yùn)行",就這樣——我們是管理者,我們可以檢查,愿意的話也可以修改人工智能的工作。
【51】And so after this talk, you will be able to access this yourself.
所以在這次談話之后, 你可以自己訪問這個(gè)。
【52】And there we go.
我們開始了。
【53】Cool.
酷。
【54】Thank you, everyone.
謝謝大家。
【55】(Applause) So we’ll cut back to the slides.
(掌聲)那我們把鏡頭切回幻燈片。
【56】Now, the important thing about how we build this, it's not just about building these tools.
現(xiàn)在,關(guān)于我們?nèi)绾螛?gòu)建這些,重要的不只是構(gòu)建這些工具本身。
【57】It's about teaching the AI how to use them.
而是要教會(huì)人工智能如何使用它們。
【58】Like, what do we even want it to do when we ask these very high-level questions?
比如,當(dāng)我們提出這些非常高層次的問題時(shí),我們到底希望它做什么?
【59】And to do this, we use an old idea.
為了做到這一點(diǎn),我們使用了一個(gè)古老的想法。
【60】If you go back to Alan Turing's 1950 paper on the Turing test, he says, you'll never program an answer to this.
如果你回到艾倫·圖靈1950年關(guān)于圖靈測(cè)試的論文,他說,你永遠(yuǎn)無法靠編程直接寫出答案。
【61】Instead, you can learn it.
相反,你可以學(xué)習(xí)它。
【62】You could build a machine, like a human child, and then teach it through feedback.
你可以制造一臺(tái)機(jī)器, 就像人類的孩子一樣, 然后通過反饋進(jìn)行教學(xué)。
【63】Have a human teacher who provides rewards and punishments as it tries things out and does things that are either good or bad.
讓一位人類教師在它嘗試各種做法、做出或好或壞的事情時(shí),給予獎(jiǎng)勵(lì)和懲罰。
【64】And this is exactly how we train ChatGPT.
這正是我們訓(xùn)練ChatGPT的方式。
【65】It's a two-step process.
這是一個(gè)分兩步走的過程。
【66】First, we produce what Turing would have called a child machine through an unsupervised learning process.
首先,我們通過無監(jiān)督學(xué)習(xí)過程,造出圖靈所說的"兒童機(jī)器"。
【67】We just show it the whole world, the whole internet and say, “Predict what comes next in text you’ve never seen before.”
我們只是把整個(gè)世界、整個(gè)互聯(lián)網(wǎng)展示給它,然后說:"預(yù)測(cè)你從未見過的文本接下來是什么。"
【68】And this process imbues it with all sorts of wonderful skills.
而這個(gè)過程賦予了它各種奇妙的技能。
【69】For example, if you're shown a math problem, the only way to actually complete that math problem, to say what comes next, that green nine up there, is to actually solve the math problem.
例如,如果給你看一道數(shù)學(xué)題,要真正補(bǔ)全這道題、說出接下來是什么——比如上面那個(gè)綠色的9——唯一的辦法就是真的把這道數(shù)學(xué)題解出來。
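As a toy illustration of that unsupervised step, the sketch below trains a bigram counting model to "predict what comes next." The corpus and the model are invented stand-ins, vastly simpler than a real language model, but the objective is the same shape.

```python
# A toy stand-in for the unsupervised step: learn to predict what comes next.
# A bigram count model plays the role of a large neural network here.
from collections import Counter, defaultdict

corpus = "one plus one equals two . two plus two equals four .".split()

# "Training": count which token follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the next token most often seen after `token` in training."""
    return following[token].most_common(1)[0][0]

# For a real model and a real math problem, predicting the token after
# "equals" forces it to actually carry out the computation.
print(predict_next("equals"))
```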
【70】But we actually have to do a second step, too, which is to teach the AI what to do with those skills.
但我們其實(shí)還要做第二步,那就是教人工智能如何運(yùn)用這些技能。
【71】And for this, we provide feedback.
為此,我們提供反饋。
【72】We have the AI try out multiple things, give us multiple suggestions, and then a human rates them, says “This one’s better than that one.”
我們讓人工智能嘗試多種做法,給我們多個(gè)建議,然后由人來打分,說"這個(gè)比那個(gè)好"。
【73】And this reinforces not just the specific thing that the AI said, but very importantly, the whole process that the AI used to produce that answer.
這不僅強(qiáng)化了人工智能說出的具體內(nèi)容,更重要的是,還強(qiáng)化了人工智能得出那個(gè)答案所用的整個(gè)過程。
【74】And this allows it to generalize.
這使得它可以泛化。
【75】It allows it to teach, to sort of infer your intent and apply it in scenarios that it hasn't seen before, that it hasn't received feedback.
這讓它能夠推斷你的意圖,并把它應(yīng)用到從未見過、從未收到過反饋的場(chǎng)景中。
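Below is a rough, self-contained sketch of that feedback step: a human says "this one's better than that one," and a stand-in reward function is nudged until it agrees. The features and the update rule are invented for illustration; real systems train a neural reward model and then optimize the policy against it.

```python
# A rough sketch of learning from pairwise human preferences.
import math

def reward(answer, weights):
    # Invented features; a real reward model is a neural network.
    features = [len(answer), answer.count("because")]
    return sum(w * f for w, f in zip(weights, features))

def preference_loss(better, worse, weights):
    """Bradley-Terry style loss: lower when the preferred answer scores higher."""
    margin = reward(better, weights) - reward(worse, weights)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

comparisons = [("It rains because warm air rises and cools.", "It rains.")]
weights = [0.0, 0.0]
for _ in range(200):  # crude coordinate search, just to show the idea
    for better, worse in comparisons:
        for i in range(len(weights)):
            trial = list(weights)
            trial[i] += 0.01
            if preference_loss(better, worse, trial) < preference_loss(better, worse, weights):
                weights = trial
print(weights)  # the reward now prefers the kind of answer the human ranked higher
```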
【76】Now, sometimes the things we have to teach the AI are not what you'd expect.
有時(shí)候,我們必須教給人工智能的東西并不是你所預(yù)期的。
【77】For example, when we first showed GPT-4 to Khan Academy, they said, "Wow, this is so great, We're going to be able to teach students wonderful things.
例如,當(dāng)我們第一次把GPT-4展示給可汗學(xué)院時(shí),他們說:"哇,這太棒了,我們將能教給學(xué)生很多美妙的東西。
【78】Only one problem, it doesn't double-check students' math.
只有一個(gè)問題:它不會(huì)復(fù)核學(xué)生的數(shù)學(xué)。
【79】If there's some bad math in there, it will happily pretend that one plus one equals three and run with it."
如果里面有錯(cuò)誤的數(shù)學(xué),它會(huì)樂呵呵地假裝一加一等于三,并將錯(cuò)就錯(cuò)。"
【80】So we had to collect some feedback data.
所以我們不得不收集一些反饋數(shù)據(jù)。
【81】Sal Khan himself was very kind and offered 20 hours of his own time to provide feedback to the machine alongside our team.
薩爾·汗本人非常友善 并提供自己20小時(shí)的時(shí)間 向機(jī)器提供反饋 與我們的團(tuán)隊(duì)并肩作戰(zhàn)。
【82】And over the course of a couple of months we were able to teach the AI that, "Hey, you really should push back on humans in this specific kind of scenario."
在幾個(gè)月的時(shí)間里,我們教會(huì)了人工智能:"嘿,在這種特定情形下,你真的應(yīng)該反駁人類。"
【83】And we've actually made lots and lots of improvements to the models this way.
事實(shí)上,我們通過這種方式對(duì)模型做了很多很多改進(jìn)。
【84】And when you push that thumbs down in ChatGPT, that actually is kind of like sending up a bat signal to our team to say, “Here’s an area of weakness where you should gather feedback.”
當(dāng)你在ChatGPT里按下"踩"(拇指向下)時(shí),那實(shí)際上就像向我們的團(tuán)隊(duì)發(fā)出蝙蝠信號(hào):"這里是一個(gè)薄弱環(huán)節(jié),你們應(yīng)該收集反饋。"
【85】And so when you do that, that's one way that we really listen to our users and make sure we're building something that's more useful for everyone.
所以當(dāng)你這樣做的時(shí)候, 這是我們真正的 傾聽我們的用戶 確保我們正在建造一些東西 這對(duì)每個(gè)人都更有用。
【86】Now, providing high-quality feedback is a hard thing.
現(xiàn)在,提供高質(zhì)量 反饋是一件很難的事情。
【87】If you think about asking a kid to clean their room, if all you're doing is inspecting the floor, you don't know if you're just teaching them to stuff all the toys in the closet.
想想讓一個(gè)孩子打掃房間:如果你只檢查地板,你就不知道自己是不是只是在教他們把所有玩具都塞進(jìn)壁櫥。
【88】This is a nice DALL-E-generated image, by the way.
順便說一句,這是一張很不錯(cuò)的、由DALL-E生成的圖片。
【89】And the same sort of reasoning applies to AI.
同樣的道理也適用于人工智能。
【90】As we move to harder tasks, we will have to scale our ability to provide high-quality feedback.
當(dāng)我們進(jìn)入更艱巨的任務(wù)時(shí), 我們必須擴(kuò)大我們的能力 以提供高質(zhì)量的反饋。
【91】But for this, the AI itself is happy to help.
但為此,人工智能本身 很樂意提供幫助。
【92】It's happy to help us provide even better feedback and to scale our ability to supervise the machine as time goes on.
它很樂意幫助我們提供更好的反饋,并隨著時(shí)間推移擴(kuò)大我們監(jiān)督機(jī)器的能力。
【93】And let me show you what I mean.
讓我告訴你我的意思。
【94】For example, you can ask GPT-4 a question like this, of how much time passed between these two foundational blogs on unsupervised learning and learning from human feedback.
例如,你可以問GPT-4這樣一個(gè)問題:這兩篇分別關(guān)于無監(jiān)督學(xué)習(xí)和從人類反饋中學(xué)習(xí)的奠基性博客之間,隔了多長(zhǎng)時(shí)間。
【95】And the model says two months passed.
模型顯示兩個(gè)月過去了。
【96】But is it true?
但這是真的嗎?
【97】Like, these models are not 100-percent reliable, although they’re getting better every time we provide some feedback.
這些模型并不是百分之百可靠,盡管每次我們提供反饋,它們都在變得更好。
【98】But we can actually use the AI to fact-check.
但我們實(shí)際上可以使用 人工智能進(jìn)行事實(shí)核查。
【99】And it can actually check its own work.
它實(shí)際上可以檢查自己的工作。
【100】You can say, fact-check this for me.
你可以說,為我核實(shí)一下事實(shí)。
【101】Now, in this case, I've actually given the AI a new tool.
現(xiàn)在,在這種情況下,我實(shí)際上 給人工智能一個(gè)新的工具。
【102】This one is a browsing tool where the model can issue search queries and click into web pages.
這是一個(gè)瀏覽工具,模型可以用它發(fā)出搜索查詢并點(diǎn)進(jìn)網(wǎng)頁。
【103】And it actually writes out its whole chain of thought as it does it.
而且它在做的過程中會(huì)把整個(gè)思維鏈寫出來。
【104】It says, I’m just going to search for this and it actually does the search.
它說:"我就去搜索這個(gè)",然后真的執(zhí)行了搜索。
【105】It then finds the publication date and the search results.
然后它找到了發(fā)布日期和搜索結(jié)果。
【106】It then is issuing another search query.
然后它發(fā)出另一個(gè)搜索查詢。
【107】It's going to click into the blog post.
它將點(diǎn)擊進(jìn)入博客文章。
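A hypothetical sketch of such a browsing tool is below: the model acts only through two primitives, search and click, and every step is printed, which is what makes the chain auditable. The pages and dates are made up for illustration.

```python
# A hypothetical browsing tool: two primitives, every step written out.
FAKE_WEB = {
    "unsupervised learning blog": "Published June 2018",
    "human feedback blog": "Published June 2017",
}

def search(query):
    print(f"[model] searching: {query!r}")
    return [t for t in FAKE_WEB if all(w in t for w in query.split())]

def open_page(title):
    print(f"[model] clicking into: {title!r}")
    return FAKE_WEB[title]

# The printed transcript of tool calls is what a human can verify afterwards:
for query in ("unsupervised learning", "human feedback"):
    for title in search(query):
        print(f"[model] found: {open_page(title)}")
```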
【108】And all of this you could do, but it’s a very tedious task.
所有這些你本來都可以自己做,但這是一項(xiàng)非常乏味的任務(wù)。
【109】It's not a thing that humans really want to do.
這不是人類真正想做的事。
【110】It's much more fun to be in the driver's seat, to be in this manager's position where you can, if you want, triple-check the work.
坐在駕駛座上、處在管理者的位置上要有趣得多——如果你愿意,你可以對(duì)工作做三重檢查。
【111】And out come citations so you can actually go and very easily verify any piece of this whole chain of reasoning.
而且它會(huì)給出引用來源,讓你可以非常容易地核實(shí)整條推理鏈中的任何一環(huán)。
【112】And it actually turns out two months was wrong.
事實(shí)證明 兩個(gè)月是錯(cuò)誤的。
【113】Two months and one week, that was correct.
兩個(gè)月零一周, 這是正確的。
【114】(Applause) And we'll cut back to the slides.
(掌聲)我們切回幻燈片。
【115】And so the thing that's so interesting to me about this whole process is that it's this many-step collaboration between a human and an AI.
對(duì)我來說,這整個(gè)過程最有趣的地方在于,這是人與人工智能之間多步驟的協(xié)作。
【116】Because a human, using this fact-checking tool is doing it in order to produce data for another AI to become more useful to a human.
因?yàn)橐粋€(gè)人使用這個(gè)事實(shí)核查工具,是為了產(chǎn)生數(shù)據(jù),讓另一個(gè)人工智能對(duì)人類更有用。
【117】And I think this really shows the shape of something that we should expect to be much more common in the future, where we have humans and machines kind of very carefully
我認(rèn)為這確實(shí)展示了一種我們應(yīng)該期待在未來變得更普遍的模式:人與機(jī)器被非常小心
【118】and delicately designed in how they fit into a problem and how we want to solve that problem.
而精巧地安排好各自如何嵌入一個(gè)問題,以及我們想如何解決那個(gè)問題。
【119】We make sure that the humans are providing the management, the oversight, the feedback, and the machines are operating in a way that's inspectable and trustworthy.
我們確保由人類提供管理、監(jiān)督和反饋,而機(jī)器以可檢查、可信賴的方式運(yùn)行。
【120】And together we're able to actually create even more trustworthy machines.
我們能夠一起創(chuàng)造 甚至更值得信賴的機(jī)器。
【121】And I think that over time, if we get this process right, we will be able to solve impossible problems.
我認(rèn)為隨著時(shí)間的推移, 如果我們把這個(gè)過程做好, 我們將能夠解決 不可能的問題。
【122】And to give you a sense of just how impossible I'm talking, I think we're going to be able to rethink almost every aspect of how we interact with computers.
為了讓你體會(huì)我所說的"不可能"有多不可能:我認(rèn)為我們將能夠重新思考我們與計(jì)算機(jī)交互的幾乎每一個(gè)方面。
【123】For example, think about spreadsheets.
例如,想想電子表格。
【124】They've been around in some form since, we'll say, 40 years ago with VisiCalc.
它們以某種形式存在已久——就算從40年前的VisiCalc算起吧。
【125】I don't think they've really changed that much in that time.
我不認(rèn)為他們真的 在那段時(shí)間里改變了那么多。
【126】And here is a specific spreadsheet of all the AI papers on the arXiv for the past 30 years.
這是一張具體的電子表格,包含過去30年里arXiv上所有的人工智能論文。
【127】There's about 167,000 of them.
大約有16.7萬篇。
【128】And you can see there the data right here.
你可以在這里看到數(shù)據(jù)。
【129】But let me show you the ChatGPT take on how to analyze a data set like this.
但讓我給你們看看,ChatGPT會(huì)如何分析這樣一個(gè)數(shù)據(jù)集。
【130】So we can give ChatGPT access to yet another tool, this one a Python interpreter, so it’s able to run code, just like a data scientist would.
所以我們可以讓ChatGPT使用又一個(gè)工具——這次是一個(gè)Python解釋器,這樣它就能像數(shù)據(jù)科學(xué)家一樣運(yùn)行代碼。
【131】And so you can just literally upload a file and ask questions about it.
所以你可以 上傳文件 并就此提出問題。
【132】And very helpfully, you know, it knows the name of the file and it's like, "Oh, this is CSV," comma-separated value file, "I'll parse it for you."
而且很貼心,你知道,它知道文件名,然后就說:"哦,這是CSV"——逗號(hào)分隔值文件——"我來幫你解析。"
【133】The only information here is the name of the file, the column names like you saw and then the actual data.
這里唯一的信息 是文件的名稱, 你看到的列名 然后是實(shí)際數(shù)據(jù)。
【134】And from that it's able to infer what these columns actually mean.
由此可以推斷 這些列的實(shí)際含義。
【135】Like, that semantic information wasn't in there.
比如,語義信息 不在里面。
【136】It has to sort of, put together its world knowledge of knowing that, “Oh yeah, arXiv is a site that people submit papers and therefore that's what these things are and that these are integer values and so therefore it's a number of authors in the paper,"
它必須調(diào)動(dòng)自己的世界知識(shí),知道:"哦對(duì),arXiv是一個(gè)人們提交論文的網(wǎng)站,所以這些東西就是論文,而這些是整數(shù)值,所以它是論文的作者數(shù)量",
【137】like all of that, that’s work for a human to do, and the AI is happy to help with it.
所有這些本來都是人類要做的工作,而人工智能很樂意幫忙。
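As a rough sketch of that first step, assuming a pandas-style interpreter tool and invented column names (not the actual file used on stage), this is the kind of parse-and-inspect code the model might run:

```python
# Parse the file, then look at what the columns could plausibly mean.
import io
import pandas as pd

csv_text = """title,authors,year
Attention Is All You Need,8,2017
Deep Residual Learning for Image Recognition,4,2015
"""

df = pd.read_csv(io.StringIO(csv_text))
print(df.dtypes)                 # 'authors' parses as an integer column...
print(df["authors"].describe())  # ...so it plausibly means "number of authors"
```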
【138】Now I don't even know what I want to ask.
現(xiàn)在我甚至不知道我想問什么。
【139】So fortunately, you can ask the machine, "Can you make some exploratory graphs?"
所以幸運(yùn)的是,你可以問機(jī)器, “你能做一些探索性的圖表嗎?”
【140】And once again, this is a super high-level instruction with lots of intent behind it.
再說一次,這是一條背后帶著大量意圖的、非常高層次的指令。
【141】But I don't even know what I want.
但我甚至不知道自己想要什么。
【142】And the AI kind of has to infer what I might be interested in.
人工智能必須進(jìn)行推斷 我可能感興趣的東西。
【143】And so it comes up with some good ideas, I think.
而我認(rèn)為,它想出了一些好主意。
【144】So a histogram of the number of authors per paper, time series of papers per year, word cloud of the paper titles.
每篇論文作者數(shù)量的直方圖、每年論文數(shù)量的時(shí)間序列、論文標(biāo)題的詞云。
【145】All of that, I think, will be pretty interesting to see.
所有這些,我認(rèn)為, 將會(huì)非常有趣。
【146】And the great thing is, it can actually do it.
最棒的是, 它實(shí)際上可以做到。
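A plausible version of those three graphs, using matplotlib on a small invented dataframe (the real one has ~167,000 rows), might look like this sketch:

```python
# Sketch of the three exploratory graphs on an invented toy dataframe.
from collections import Counter
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "title": ["Attention Is All You Need", "Deep Residual Learning"],
    "authors": [8, 4],
    "year": [2017, 2015],
})

fig, axes = plt.subplots(1, 3, figsize=(15, 4))

# 1. Histogram of authors per paper (the "nice bell curve").
df["authors"].plot.hist(ax=axes[0], bins=20, title="Authors per paper")

# 2. Time series of papers per year.
df.groupby("year").size().plot(ax=axes[1], title="Papers per year")

# 3. A crude stand-in for a word cloud: the most common title words.
words = Counter(w.lower() for title in df["title"] for w in title.split())
labels, counts = zip(*words.most_common(10))
axes[2].bar(labels, counts)
axes[2].set_title("Most common title words")

plt.tight_layout()
plt.show()
```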
【147】Here we go, a nice bell curve.
我們開始了,一個(gè)漂亮的鐘形曲線。
【148】You see that three is kind of the most common.
你可以看到,三位作者是最常見的。
【149】It's going to then make this nice plot of the papers per year.
然后它會(huì)畫出一張很好的每年論文數(shù)量圖。
【150】Something crazy is happening in 2023, though.
不過,2023年似乎發(fā)生了一些瘋狂的事情。
【151】Looks like we were on an exponential and it dropped off the cliff.
看起來我們本來處在指數(shù)增長(zhǎng)曲線上,然后它跌落了懸崖。
【152】What could be going on there?
那里可能發(fā)生了什么?
【153】By the way, all this is Python code, you can inspect.
順便說一句,這些都是Python代碼,你可以檢查。
【154】And then we'll see word cloud.
然后我們會(huì)看到詞云。
【155】So you can see all these wonderful things that appear in these titles.
所以你可以看到所有這些美妙的東西 出現(xiàn)在這些標(biāo)題中。
【156】But I'm pretty unhappy about this 2023 thing.
但我對(duì)這個(gè)2023年的情況很不滿意。
【157】It makes this year look really bad.
這讓今年看起來很糟糕。
【158】Of course, the problem is that the year is not over.
當(dāng)然,問題是 這一年還沒有結(jié)束。
【159】So I'm going to push back on the machine.
所以我要向機(jī)器提出異議。
【160】[Waitttt that's not fair!!!
這不公平?。。?/p>
【161】2023 isn't over.
2023年還沒有結(jié)束。
【162】What percentage of papers in 2022 were even posted by April 13?] So April 13 was the cut-off date I believe.
2022年的論文中,有多少比例是在4月13日之前發(fā)布的?] 我記得4月13日是數(shù)據(jù)的截止日期。
【163】Can you use that to make a fair projection?
你能用這個(gè)做一個(gè)公平的預(yù)測(cè)嗎?
【164】So we'll see, this is the kind of ambitious one.
我們拭目以待——這算是比較有野心的一問。
【165】(Laughter) So you know, again, I feel like there was more I wanted out of the machine here.
(眾笑)所以你知道,再說一次,我覺得我還想從機(jī)器那里得到更多。
【166】I really wanted it to notice this thing, maybe it's a little bit of an overreach for it to have sort of, inferred magically that this is what I wanted.
我真的很想讓它注意到這一點(diǎn),不過要它神奇地推斷出這就是我想要的,也許有點(diǎn)強(qiáng)求了。
【167】But I inject my intent, I provide this additional piece of, you know, guidance.
但我注入了我的意圖, 我提供這個(gè)附加件 你知道,指導(dǎo)。
【168】And under the hood, the AI is just writing code again, so if you want to inspect what it's doing, it's very possible.
而在底層,AI只是又在寫代碼,所以如果你想檢查它在做什么,完全可以。
【169】And now, it does the correct projection.
現(xiàn)在,它做出了正確的預(yù)測(cè)推算。
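The "fair projection" itself is simple arithmetic. A sketch with made-up counts, assuming April 13 as the cutoff date mentioned above:

```python
# All counts here are invented placeholders, just to show the arithmetic.
papers_2022_total = 51_000
papers_2022_by_apr13 = 14_000   # how much of 2022 had been posted by April 13
papers_2023_by_apr13 = 16_500

fraction_of_year = papers_2022_by_apr13 / papers_2022_total
projected_2023 = papers_2023_by_apr13 / fraction_of_year
print(f"projected 2023 total: {projected_2023:,.0f} papers")
```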
【170】(Applause) If you noticed, it even updates the title.
(掌聲) 如果你注意到了,它甚至?xí)聵?biāo)題。
【171】I didn't ask for that, but it knows what I want.
我沒有要求,但它知道我想要什么。
【172】Now we'll cut back to the slide again.
現(xiàn)在我們將再次回到幻燈片。
【173】This slide shows a parable of how I think we ...
這張幻燈片展示了一個(gè)寓言 我認(rèn)為我們。。。
【174】A vision of how we may end up using this technology in the future.
我們最終會(huì)如何 在未來使用這項(xiàng)技術(shù)。
【175】A person brought his very sick dog to the vet, and the veterinarian made a bad call to say, “Let’s just wait and see.”
一個(gè)人帶著他病得很重的狗去看獸醫(yī),獸醫(yī)做出了一個(gè)糟糕的判斷:"我們先觀察觀察吧。"
【176】And the dog would not be here today had he listened.
如果他聽了那個(gè)建議,這只狗今天就不在了。
【177】In the meanwhile, he provided the blood test, like, the full medical records, to GPT-4, which said, "I am not a vet, you need to talk to a professional, here are some hypotheses."
與此同時(shí),他把血液化驗(yàn)結(jié)果之類的完整病歷提供給了GPT-4,它說:"我不是獸醫(yī),你需要咨詢專業(yè)人士,但這里有一些假設(shè)。"
【178】He brought that information to a second vet who used it to save the dog's life.
他把這些信息帶給了第二位獸醫(yī),后者用它救了狗的命。
【179】Now, these systems, they're not perfect.
現(xiàn)在,這些系統(tǒng)并不完美。
【180】You cannot overly rely on them.
你不能過度依賴他們。
【181】But this story, I think, shows that a human with a medical professional and with ChatGPT as a brainstorming partner was able to achieve an outcome that would not have happened otherwise.
但我認(rèn)為這個(gè)故事表明:一個(gè)人,加上醫(yī)學(xué)專業(yè)人士,再加上作為頭腦風(fēng)暴伙伴的ChatGPT,能夠取得原本不會(huì)發(fā)生的結(jié)果。
【182】I think this is something we should all reflect on, think about as we consider how to integrate these systems into our world.
我認(rèn)為在考慮如何把這些系統(tǒng)融入我們的世界時(shí),這是我們都應(yīng)該反思和思考的事情。
【183】And one thing I believe really deeply, is that getting AI right is going to require participation from everyone.
有一件事我深信不疑:要把人工智能做好,需要每個(gè)人的參與。
【184】And that's for deciding how we want it to slot in, that's for setting the rules of the road, for what an AI will and won't do.
包括決定我們希望它如何融入,包括制定"道路規(guī)則"——規(guī)定人工智能會(huì)做什么、不會(huì)做什么。
【185】And if there's one thing to take away from this talk, it's that this technology just looks different.
如果這場(chǎng)演講你只記住一件事,那就是:這項(xiàng)技術(shù)看起來就是不一樣。
【186】Just different from anything people had anticipated.
與人們?cè)?jīng)預(yù)想的任何東西都不一樣。
【187】And so we all have to become literate.
因此,我們都必須學(xué)會(huì)讀懂它、具備相應(yīng)的素養(yǎng)。
【188】And that's, honestly, one of the reasons we released ChatGPT.
老實(shí)說,這是一個(gè) 我們發(fā)布ChatGPT的原因之一。
【189】Together, I believe that we can achieve the OpenAI mission of ensuring that artificial general intelligence benefits all of humanity.
我相信,我們能夠共同實(shí)現(xiàn)OpenAI的使命:確保通用人工智能造福全人類。
【190】Thank you.
非常感謝。
【191】(Applause)
(掌聲)
【192】(Applause ends) Chris Anderson: Greg.
(掌聲結(jié)束) 克里斯·安德森:格雷格。
【193】Wow.
哇!
【194】I mean ...
我的意思是。。。
【195】I suspect that within every mind out here there's a feeling of reeling.
我猜想,在座每個(gè)人的腦海里都有一種天旋地轉(zhuǎn)的感覺。
【196】Like, I suspect that a very large number of people viewing this, you look at that and you think, “Oh my goodness, pretty much every single thing about the way I work, I need to rethink."
比如,我猜想觀看這個(gè)演講的很多很多人看到這些會(huì)想:"天哪,我工作方式的幾乎每一件事,我都需要重新思考。"
【197】Like, there's just new possibilities there.
就像,只是 那里有新的可能性。
【198】Am I right?
我說得對(duì)嗎?
【199】Who thinks that they're having to rethink the way that we do things?
誰認(rèn)為他們必須重新思考 我們做事的方式?
【200】Yeah, I mean, it's amazing, but it's also really scary.
是的,我的意思是,這太神奇了, 但這也很可怕。
【201】So let's talk, Greg, let's talk.
讓我們談?wù)?格雷格,讓我們談?wù)劇?/p>
【202】I mean, I guess my first question actually is just how the hell have you done this?
我的意思是,我想 我的第一個(gè)問題實(shí)際上只是 你到底是怎么做到的?
【203】(Laughter) OpenAI has a few hundred employees.
(眾笑) OpenAI有幾百名員工。
【204】Google has thousands of employees working on artificial intelligence.
谷歌有數(shù)千名員工 致力于人工智能。
【205】Why is it you who's come up with this technology that shocked the world?
為什么偏偏是你們拿出了這項(xiàng)震驚世界的技術(shù)?
【206】Greg Brockman: I mean, the truth is, we're all building on shoulders of giants, right, there's no question.
格雷格·布羅克曼:我的意思是,事實(shí)是,我們都是站在巨人的肩膀上,對(duì)吧,這毫無疑問。
【207】If you look at the compute progress, the algorithmic progress, the data progress, all of those are really industry-wide.
如果你看看計(jì)算進(jìn)度, 算法的進(jìn)展, 數(shù)據(jù)進(jìn)度, 所有這些都是全行業(yè)的。
【208】But I think within OpenAI, we made a lot of very deliberate choices from the early days.
但我認(rèn)為在OpenAI內(nèi)部,我們從早期就做了很多非常深思熟慮的選擇。
【209】And the first one was just to confront reality as it lays.
第一條,就是直面現(xiàn)實(shí)本來的樣子。
【210】And that we just thought really hard about like: What is it going to take to make progress here?
我們只是非常認(rèn)真地思考:要在這里取得進(jìn)展,需要什么?
【211】We tried a lot of things that didn't work, so you only see the things that did.
我們?cè)囘^很多行不通的東西,所以你們只看到了那些行得通的。
【212】And I think that the most important thing has been to get teams of people who are very different from each other to work together harmoniously.
而我認(rèn)為最重要的,是讓彼此非常不同的人組成的團(tuán)隊(duì)和諧地協(xié)作。
【213】CA: Can we have the water, by the way, just brought here?
CA:順便問一下,能把水拿到這里來嗎?
【214】I think we're going to need it, it's a dry-mouth topic.
我認(rèn)為我們需要它, 這是一個(gè)口干舌燥的話題。
【215】But isn't there something also just about the fact that you saw something in these language models that meant that if you continue to invest in them and grow them, that something at some point might emerge?
但是不是還有一點(diǎn):你們?cè)谶@些語言模型中看到了某種東西,意味著如果你們繼續(xù)投入并擴(kuò)大它們,某個(gè)時(shí)刻可能會(huì)涌現(xiàn)出一些東西?
【216】GB: Yes.
GB:是的。
【217】And I think that, I mean, honestly, I think the story there is pretty illustrative, right?
我認(rèn)為,老實(shí)說, 我認(rèn)為那里的故事 很能說明問題,對(duì)吧?
【218】I think that high level, deep learning, like we always knew that was what we wanted to be, was a deep learning lab, and exactly how to do it?
我認(rèn)為在大方向上,我們一直知道自己想做的就是一個(gè)深度學(xué)習(xí)實(shí)驗(yàn)室,但具體該怎么做?
【219】I think that in the early days, we didn't know.
我認(rèn)為在早期, 我們不知道。
【220】We tried a lot of things, and one person was working on training a model to predict the next character in Amazon reviews, and he got a result where -- this is a syntactic process,
我們?cè)囘^很多東西,其中一個(gè)人當(dāng)時(shí)在訓(xùn)練一個(gè)模型,去預(yù)測(cè)亞馬遜評(píng)論中的下一個(gè)字符,然后他得到了一個(gè)結(jié)果——這本是一個(gè)句法過程,
【221】you expect, you know, the model will predict where the commas go, where the nouns and verbs are.
你期待,你知道,模型 將預(yù)測(cè)逗號(hào)的位置, 名詞和動(dòng)詞在哪里。
【222】But he actually got a state-of-the-art sentiment analysis classifier out of it.
但他實(shí)際上從中得到了一個(gè)達(dá)到最先進(jìn)水平的情感分析分類器。
【223】This model could tell you if a review was positive or negative.
這個(gè)模型可以告訴你 評(píng)論是正面的還是負(fù)面的。
【224】I mean, today we are just like, come on, anyone can do that.
我的意思是,今天我們就像, 拜托,任何人都可以做到。
【225】But this was the first time that you saw this emergence, this sort of semantics that emerged from this underlying syntactic process.
但那是你第一次看到這種涌現(xiàn)——從底層的句法過程中涌現(xiàn)出了某種語義。
【226】And there we knew, you've got to scale this thing, you've got to see where it goes.
那時(shí)我們就知道:必須把這個(gè)東西的規(guī)模做上去,看看它會(huì)走到哪里。
【227】CA: So I think this helps explain the riddle that baffles everyone looking at this, because these things are described as prediction machines.
CA:所以我認(rèn)為這有助于解釋讓每個(gè)看到這一切的人都困惑的謎題,因?yàn)檫@些東西被描述為"預(yù)測(cè)機(jī)器"。
【228】And yet, what we're seeing out of them feels ...
然而,我們從它們身上看到的東西,感覺……
【229】it just feels impossible that that could come from a prediction machine.
感覺那不可能出自一臺(tái)預(yù)測(cè)機(jī)器。
【230】Just the stuff you showed us just now.
只是你剛才給我們看的東西。
【231】And the key idea of emergence is that when you get more of a thing, suddenly different things emerge.
而"涌現(xiàn)"的核心思想是:當(dāng)某種東西的數(shù)量變多時(shí),突然會(huì)涌現(xiàn)出不同的東西。
【232】It happens all the time, ant colonies, single ants run around, when you bring enough of them together, you get these ant colonies that show completely emergent, different behavior.
這種事隨處可見:蟻群里,單只螞蟻四處跑動(dòng),但當(dāng)你把足夠多的螞蟻聚在一起,就會(huì)得到表現(xiàn)出完全涌現(xiàn)的、不同行為的蟻群。
【233】Or a city where a few houses together, it's just houses together.
或者一個(gè)有幾棟房子的城市, 只是房子在一起。
【234】But as you grow the number of houses, things emerge, like suburbs and cultural centers and traffic jams.
但隨著房屋數(shù)量的增長(zhǎng), 事情出現(xiàn)了,比如郊區(qū) 文化中心和交通堵塞。
【235】Give me one moment for you when you saw just something pop that just blew your mind that you just did not see coming.
給我講一個(gè)那樣的時(shí)刻:你看到某種東西突然冒出來,讓你大吃一驚,完全沒有料到。
【236】GB: Yeah, well, so you can try this in ChatGPT, if you add 40-digit numbers -- CA: 40-digit?
GB:是的,嗯,你可以在ChatGPT里試試:如果你把兩個(gè)40位的數(shù)字相加——CA:40位?
【237】GB: 40-digit numbers, the model will do it, which means it's really learned an internal circuit for how to do it.
GB:40位的數(shù)字,模型能算出來,這意味著它真的學(xué)會(huì)了做這件事的內(nèi)部回路。
【238】And the really interesting thing is actually, if you have it add like a 40-digit number plus a 35-digit number, it'll often get it wrong.
而真正有趣的是,如果你讓它把一個(gè)40位數(shù)加上一個(gè)35位數(shù),它常常會(huì)算錯(cuò)。
【239】And so you can see that it's really learning the process, but it hasn't fully generalized, right?
所以你可以看到,它確實(shí)在學(xué)習(xí)這個(gè)過程,但還沒有完全泛化,對(duì)吧?
【240】It's like you can't memorize the 40-digit addition table, that's more atoms than there are in the universe.
就像你不可能背下40位數(shù)的加法表——那需要的比宇宙中的原子還多。
【241】So it had to have learned something general, but that it hasn't really fully yet learned that, Oh, I can sort of generalize this to adding arbitrary numbers of arbitrary lengths.
所以它一定學(xué)到了某種通用的東西,但還沒有完全學(xué)會(huì)"哦,我可以把它推廣到任意長(zhǎng)度、任意數(shù)字的加法"。
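A small sketch of how one might probe that generalization gap, with `ask_model` standing in for whatever model interface is being tested (it is a placeholder, not a real API):

```python
# Probe same-length vs. mixed-length addition against Python's exact answer.
import random

def random_number(digits):
    return random.randint(10 ** (digits - 1), 10 ** digits - 1)

def probe(ask_model, a_digits, b_digits, trials=20):
    """Fraction of exact answers on a_digits + b_digits addition problems."""
    correct = 0
    for _ in range(trials):
        a, b = random_number(a_digits), random_number(b_digits)
        if ask_model(f"What is {a} + {b}?").strip() == str(a + b):
            correct += 1
    return correct / trials

# e.g. compare probe(model, 40, 40) against probe(model, 40, 35)
```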
【242】CA: So what's happened here is that you've allowed it to scale up and look at an incredible number of pieces of text.
CA:所以這里發(fā)生的事情是,你們讓它擴(kuò)大規(guī)模,去看數(shù)量驚人的文本。
【243】And it is learning things that you didn't know that it was going to be capable of learning.
而它正在學(xué)習(xí)你們?cè)静恢浪苋W(xué)會(huì)的東西。
【244】GB: Well, yeah, and it's more nuanced, too.
GB:嗯,是的,而且情況比這更微妙。
【245】So one science that we’re starting to really get good at is predicting some of these emergent capabilities.
我們正開始真正擅長(zhǎng)的一門科學(xué),就是預(yù)測(cè)其中的一些涌現(xiàn)能力。
【246】And to do that actually, one of the things I think is very undersung in this field is sort of engineering quality.
而要做到這一點(diǎn),我認(rèn)為這個(gè)領(lǐng)域里被嚴(yán)重低估的一件事,就是工程質(zhì)量。
【247】Like, we had to rebuild our entire stack.
就像,我們必須重建我們的整個(gè)堆棧。
【248】When you think about building a rocket, every tolerance has to be incredibly tiny.
當(dāng)你考慮建造火箭時(shí), 每一個(gè)容忍度都必須非常小。
【249】Same is true in machine learning.
機(jī)器學(xué)習(xí)也是如此。
【250】You have to get every single piece of the stack engineered properly, and then you can start doing these predictions.
你必須把技術(shù)棧的每一個(gè)部分都設(shè)計(jì)到位,然后才能開始做這些預(yù)測(cè)。
【251】There are all these incredibly smooth scaling curves.
存在著這些極其平滑的縮放曲線。
【252】They tell you something deeply fundamental about intelligence.
它們告訴你一些關(guān)于智能的深層根本的東西。
【253】If you look at our GPT-4 blog post, you can see all of these curves in there.
如果你看看我們的GPT-4博客文章, 你可以在那里看到所有這些曲線。
【254】And now we're starting to be able to predict.
而現(xiàn)在,我們開始能夠做出預(yù)測(cè)。
【255】So we were able to predict, for example, the performance on coding problems.
因此我們能夠預(yù)測(cè),例如, 編碼問題的性能。
【256】We basically look at some models that are 10,000 times or 1,000 times smaller.
我們基本上是看一些小一萬倍或一千倍的模型。
【257】And so there's something about this that is actually smooth scaling, even though it's still early days.
所以這里面確實(shí)存在某種平滑縮放的規(guī)律,盡管現(xiàn)在還為時(shí)尚早。
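As a sketch of that kind of prediction, one can fit a power law to small runs and extrapolate. The data points below are invented; the GPT-4 blog post shows the real curves.

```python
# Fit loss ≈ a * compute^slope on small runs, then extrapolate far out.
import numpy as np

compute = np.array([1e0, 1e1, 1e2, 1e3])  # small training runs (invented)
loss = np.array([4.0, 2.9, 2.1, 1.5])     # their measured losses (invented)

# Least-squares fit of log(loss) = log(a) + slope * log(compute).
slope, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
predicted = np.exp(log_a) * (1e6) ** slope  # extrapolate 1,000x beyond the data
print(f"predicted loss at 1e6 compute: {predicted:.2f}")
```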
【258】CA: So here is, one of the big fears then, that arises from this.
CA:那么,由此產(chǎn)生的一大恐懼就是這個(gè)。
【259】If it’s fundamental to what’s happening here, that as you scale up, things emerge that you can maybe predict in some level of confidence, but it's capable of surprising you.
如果這里發(fā)生的事情的根本規(guī)律就是:隨著規(guī)模擴(kuò)大,會(huì)涌現(xiàn)出你也許能以某種置信度預(yù)測(cè)、但仍可能讓你大吃一驚的東西——
【260】Why isn't there just a huge risk of something truly terrible emerging?
那為什么不會(huì)有巨大的風(fēng)險(xiǎn),涌現(xiàn)出某種真正可怕的東西?
【261】GB: Well, I think all of these are questions of degree and scale and timing.
GB:嗯,我認(rèn)為這些都是程度、規(guī)模和時(shí)機(jī)的問題。
【262】And I think one thing people miss, too, is sort of the integration with the world is also this incredibly emergent, sort of, very powerful thing too.
我認(rèn)為人們還忽略了一點(diǎn):與世界的融合本身,也是一種極其涌現(xiàn)的、非常強(qiáng)大的東西。
【263】And so that's one of the reasons that we think it's so important to deploy incrementally.
這也是原因之一 我們認(rèn)為這很重要 以增量部署。
【264】And so I think that what we kind of see right now, if you look at this talk, a lot of what I focus on is providing really high-quality feedback.
所以我認(rèn)為我們現(xiàn)在看到的——如果你回顧這場(chǎng)演講——我花了很多篇幅講的,就是提供真正高質(zhì)量的反饋。
【265】Today, the tasks that we do, you can inspect them, right?
今天,我們所做的任務(wù), 你可以檢查一下,對(duì)吧?
【266】It's very easy to look at that math problem and be like, no, no, no, machine, seven was the correct answer.
看著那道數(shù)學(xué)題然后說"不不不,機(jī)器,七才是正確答案",這很容易。
【267】But even summarizing a book, like, that's a hard thing to supervise.
但即使是總結(jié)一本書, 就像,這是一件很難監(jiān)督的事情。
【268】Like, how do you know if this book summary is any good?
比如,你怎么知道這份書摘好不好?
【269】You have to read the whole book.
你必須通讀整本書。
【270】No one wants to do that.
沒有人愿意那樣做。
【271】(Laughter) And so I think that the important thing will be that we take this step by step.
(眾笑) 所以我認(rèn)為重要的是 我們將一步一步地采取這一行動(dòng)。
【272】And that we say, OK, as we move on to book summaries, we have to supervise this task properly.
我們說,好吧, 當(dāng)我們繼續(xù)閱讀書籍摘要時(shí), 我們必須正確地監(jiān)督這項(xiàng)任務(wù)。
【273】We have to build up a track record with these machines that they're able to actually carry out our intent.
我們必須和這些機(jī)器一起建立起一份記錄,證明它們真的能夠執(zhí)行我們的意圖。
【274】And I think we're going to have to produce even better, more efficient, more reliable ways of scaling this, sort of like making the machine be aligned with you.
而且我認(rèn)為,我們必須拿出更好、更高效、更可靠的方式來擴(kuò)展這件事,就像讓機(jī)器與你保持對(duì)齊。
【275】CA: So we're going to hear later in this session, there are critics who say that, you know, there's no real understanding inside, the system is going to always --
CA:本場(chǎng)晚些時(shí)候我們會(huì)聽到,有批評(píng)者說,你知道,系統(tǒng)內(nèi)部并沒有真正的理解,系統(tǒng)將永遠(yuǎn)——
【276】we're never going to know that it's not generating errors, that it doesn't have common sense and so forth.
我們永遠(yuǎn)無法確定它不會(huì)產(chǎn)生錯(cuò)誤、它沒有常識(shí),等等。
【277】Is it your belief, Greg, that it is true at any one moment, but that the expansion of the scale and the human feedback that you talked about is basically going to take it on that journey
Greg,你是否相信:這在任一時(shí)刻都是對(duì)的,但你所說的規(guī)模擴(kuò)張和人類反饋,基本上會(huì)帶著它踏上那段旅程——
【278】of actually getting to things like truth and wisdom and so forth, with a high degree of confidence.
真正抵達(dá)真理與智慧之類的東西,并有高度的信心。
【279】Can you be sure of that?
你能確定嗎?
【280】GB: Yeah, well, I think that the OpenAI, I mean, the short answer is yes, I believe that is where we're headed.
GB:是的,我認(rèn)為OpenAI, 我的意思是,簡(jiǎn)短的回答是肯定的, 我相信這就是我們前進(jìn)的方向。
【281】And I think that the OpenAI approach here has always been just like, let reality hit you in the face, right?
而且我認(rèn)為OpenAI在這里的方法一直就是:讓現(xiàn)實(shí)打到你的臉上,對(duì)吧?
【282】It's like this field is the field of broken promises, of all these experts saying X is going to happen, Y is how it works.
這個(gè)領(lǐng)域可以說是一個(gè)充滿落空承諾的領(lǐng)域——各路專家都說X會(huì)發(fā)生、Y是它的原理。
【283】People have been saying neural nets aren't going to work for 70 years.
人們說神經(jīng)網(wǎng)絡(luò)行不通,說了70年。
【284】They haven't been right yet.
他們還沒有說對(duì)。
【285】They might be right maybe 70 years plus one or something like that is what you need.
他們也許會(huì)對(duì)——也許你需要的是70年再加一年,諸如此類。
【286】But I think that our approach has always been, you've got to push to the limits of this technology to really see it in action, because that tells you then, oh, here's how we can move on to a new paradigm.
但我認(rèn)為我們的方法一直是:你必須把這項(xiàng)技術(shù)推到極限,真正看它的實(shí)際表現(xiàn),因?yàn)槟菚?huì)告訴你:哦,原來我們可以這樣邁向新的范式。
【287】And we just haven't exhausted the fruit here.
而我們還遠(yuǎn)沒有摘完這里的果實(shí)。
【288】CA: I mean, it's quite a controversial stance you've taken, that the right way to do this is to put it out there in public and then harness all this, you know, instead of just your team giving feedback, the world is now giving feedback.
CA:我的意思是,你們采取的立場(chǎng)相當(dāng)有爭(zhēng)議:正確的做法是把它公開放出去,然后利用這一切——不再只是你們的團(tuán)隊(duì)給反饋,而是全世界都在給反饋。
【289】But ...
但是
【290】If, you know, bad things are going to emerge, it is out there.
但是,如果會(huì)冒出壞的東西,它已經(jīng)在外面了。
【291】So, you know, the original story that I heard on OpenAI when you were founded as a nonprofit, well you were there as the great sort of check on the big companies doing their unknown, possibly evil thing with AI.
所以,你知道,我最初聽到的關(guān)于OpenAI的故事是:你們作為非營利組織成立,是要對(duì)那些用人工智能做著未知的、可能邪惡的事情的大公司起到制衡作用。
【292】And you were going to build models that sort of, you know, somehow held them accountable and was capable of slowing the field down, if need be.
你們要構(gòu)建某種模型,以某種方式讓它們負(fù)起責(zé)任,并在必要時(shí)有能力讓整個(gè)領(lǐng)域慢下來。
【293】Or at least that's kind of what I heard.
或者至少我是這么聽說的。
【294】And yet, what's happened, arguably, is the opposite.
然而,發(fā)生了什么, 可以說恰恰相反。
【295】That your release of GPT, especially ChatGPT, sent such shockwaves through the tech world that now Google and Meta and so forth are all scrambling to catch up.
你們發(fā)布GPT、尤其是ChatGPT,在科技界掀起了巨大的沖擊波,現(xiàn)在谷歌、Meta等等都在爭(zhēng)先恐后地追趕。
【296】And some of their criticisms have been, you are forcing us to put this out here without proper guardrails or we die.
他們的一些批評(píng)是:你們逼得我們不得不在沒有適當(dāng)護(hù)欄的情況下把東西放出來,否則我們就完了。
【297】You know, how do you, like, make the case that what you have done is responsible here and not reckless.
你知道,你要如何論證你們所做的事情是負(fù)責(zé)任的,而不是魯莽的?
【298】GB: Yeah, we think about these questions all the time.
GB:是的,我們一直在思考這些問題。
【299】Like, seriously all the time.
就像,一直很認(rèn)真。
【300】And I don't think we're always going to get it right.
我不認(rèn)為我們總是 會(huì)把事情做好的。
【301】But one thing I think has been incredibly important, from the very beginning, when we were thinking about how to build artificial general intelligence, actually have it benefit all of humanity, like, how are you supposed to do that, right?
但我認(rèn)為有一件事從一開始就極其重要:當(dāng)我們思考如何構(gòu)建通用人工智能、真正讓它造福全人類時(shí)——你到底該怎么做到這一點(diǎn),對(duì)吧?
【302】And that default plan of being, well, you build in secret, you get this super powerful thing, and then you figure out the safety of it and then you push “go,” and you hope you got it right.
而那種默認(rèn)方案是:你在秘密中構(gòu)建,得到這個(gè)超級(jí)強(qiáng)大的東西,然后搞清楚它的安全性,然后按下"啟動(dòng)",并希望自己做對(duì)了。
【303】I don't know how to execute that plan.
我不知道如何執(zhí)行那個(gè)計(jì)劃。
【304】Maybe someone else does.
也許別人知道。
【305】But for me, that was always terrifying, it didn't feel right.
但對(duì)我來說,這總是很可怕, 感覺不對(duì)勁。
【306】And so I think that this alternative approach is the only other path that I see, which is that you do let reality hit you in the face.
所以我認(rèn)為,這種替代方案是我能看到的唯一另一條路:你確實(shí)讓現(xiàn)實(shí)打到你的臉上。
【307】And I think you do give people time to give input.
我認(rèn)為你確實(shí)給了人們 提供意見的時(shí)間。
【308】You do have, before these machines are perfect, before they are super powerful, that you actually have the ability to see them in action.
在這些機(jī)器變得完美之前、在它們變得超級(jí)強(qiáng)大之前,你確實(shí)有能力看到它們的實(shí)際表現(xiàn)。
【309】And we've seen it from GPT-3, right?
我們已經(jīng)從GPT-3中看到了,對(duì)吧?
【310】GPT-3, we really were afraid that the number one thing people were going to do with it was generate misinformation, try to tip elections.
對(duì)于GPT-3,我們當(dāng)時(shí)真的很擔(dān)心,人們拿它做的頭號(hào)事情會(huì)是制造虛假信息、試圖左右選舉。
【311】Instead, the number one thing was generating Viagra spam.
結(jié)果,頭號(hào)用途是生成"偉哥"垃圾郵件。
【312】(Laughter) CA: So Viagra spam is bad, but there are things that are much worse.
(眾笑) CA:所以偉哥垃圾郵件很糟糕, 但有些事情要糟糕得多。
【313】Here's a thought experiment for you.
這是給你的一個(gè)思維實(shí)驗(yàn)。
【314】Suppose you're sitting in a room, there's a box on the table.
假設(shè)你坐在一個(gè)房間里, 桌子上有一個(gè)盒子。
【315】You believe that in that box is something that, there's a very strong chance it's something absolutely glorious that's going to give beautiful gifts to your family and to everyone.
你相信那個(gè)盒子里的東西很有可能是某種無比美好的東西,會(huì)給你的家人和所有人帶來美好的禮物。
【316】But there's actually also a one percent thing in the small print there that says: “Pandora.” And there's a chance that this actually could unleash unimaginable evils on the world.
但小字里其實(shí)還寫著百分之一的可能:"潘多拉。"而它確實(shí)有可能向世界釋放難以想象的邪惡。
【317】Do you open that box?
你會(huì)打開那個(gè)盒子嗎?
【318】GB: Well, so, absolutely not.
GB:嗯,絕對(duì)不會(huì)。
【319】I think you don't do it that way.
我認(rèn)為你不會(huì)那樣做。
【320】And honestly, like, I'll tell you a story that I haven't actually told before, which is that shortly after we started OpenAI,
老實(shí)說,我給你們講一個(gè)我之前沒有講過的故事:就在我們創(chuàng)立OpenAI后不久,
【321】I remember I was in Puerto Rico for an AI conference.
我記得我在波多黎各 參加人工智能會(huì)議。
【322】I'm sitting in the hotel room just looking out over this wonderful water, all these people having a good time.
我就坐在酒店房間里 望著這美妙的水面, 所有這些人都玩得很開心。
【323】And you think about it for a moment, if you could choose for basically that Pandora’s box to be five years away or 500 years away, which would you pick, right?
你想一想:如果你可以選擇,讓那個(gè)潘多拉魔盒是在五年之后出現(xiàn),還是在五百年之后出現(xiàn),你會(huì)選哪個(gè),對(duì)吧?
【324】On the one hand you're like, well, maybe for you personally, it's better to have it be five years away.
一方面,你喜歡, 好吧,也許對(duì)你個(gè)人來說, 最好是在五年后。
【325】But if it gets to be 500 years away and people get more time to get it right, which do you pick?
但如果是五百年之后,人們就有更多時(shí)間把它做好——你選哪個(gè)?
【326】And you know, I just really felt it in the moment.
你知道,我只是 此刻真的感覺到了。
【327】I was like, of course you do the 500 years.
我當(dāng)時(shí)想:當(dāng)然選五百年。
【328】My brother was in the military at the time and like, he puts his life on the line in a much more real way than any of us typing things in computers and developing this technology at the time.
我哥哥當(dāng)時(shí)在軍隊(duì)服役,他以遠(yuǎn)比我們這些當(dāng)時(shí)在電腦上打字、開發(fā)這項(xiàng)技術(shù)的人更真實(shí)的方式,把生命押在了第一線。
【329】And so, yeah, I'm really sold on the you've got to approach this right.
所以,是的,我完全認(rèn)同"必須以正確的方式對(duì)待這件事"。
【330】But I don't think that's quite playing the field as it truly lies.
但我不認(rèn)為那是按球場(chǎng)的真實(shí)情況來打球。
【331】Like, if you look at the whole history of computing, I really mean it when I say that this is an industry-wide or even just almost like a human-development- of-technology-wide shift.
比如,如果你看整個(gè)計(jì)算的歷史,我是認(rèn)真的:這是一場(chǎng)全行業(yè)的、甚至幾乎是席卷人類技術(shù)發(fā)展全局的轉(zhuǎn)變。
【332】And the more that you sort of, don't put together the pieces that are there, right, we're still making faster computers, we're still improving the algorithms, all of these things, they are happening.
而你越是不去把已經(jīng)擺在那里的碎片拼起來——對(duì)吧,我們?nèi)栽谥圃旄斓挠?jì)算機(jī)、仍在改進(jìn)算法,所有這些事情都在發(fā)生。
【333】And if you don't put them together, you get an overhang, which means that if someone does, or the moment that someone does manage to connect to the circuit, then you suddenly have this very powerful thing, no one's had any time to adjust, who knows what kind of safety precautions you get.
如果你不把它們拼起來,就會(huì)形成一個(gè)"懸突":一旦有人做到了,或者某個(gè)時(shí)刻有人真的把電路接通了,你就會(huì)突然擁有這個(gè)極其強(qiáng)大的東西,沒有人有時(shí)間調(diào)整,誰知道你能得到什么樣的安全防范。
【334】And so I think that one thing I take away is like, even you think about development of other sort of technologies, think about nuclear weapons, people talk about being like a zero to one, sort of, change in what humans could do.
所以我認(rèn)為我得到的一個(gè)啟示是:即便想想其他類型技術(shù)的發(fā)展,想想核武器,人們會(huì)說那是一種"從零到一"的、人類能力上的躍變。
【335】But I actually think that if you look at capability, it's been quite smooth over time.
但我實(shí)際上認(rèn)為,如果你看能力本身,它隨時(shí)間的推移其實(shí)相當(dāng)平滑。
【336】And so the history, I think, of every technology we've developed has been, you've got to do it incrementally and you've got to figure out how to manage it for each moment that you're increasing it.
所以我認(rèn)為,我們開發(fā)的每一項(xiàng)技術(shù)的歷史都是:你必須增量地去做,并且在每一次提升它的時(shí)候,都要想清楚如何管理它。
【337】CA: So what I'm hearing is that you ...
CA:所以我聽到的是你。。。
【338】the model you want us to have is that we have birthed this extraordinary child that may have superpowers that take humanity to a whole new place.
你想讓我們接受的模型是:我們誕下了這個(gè)非凡的孩子,它可能擁有把人類帶到全新境地的超能力。
【339】It is our collective responsibility to provide the guardrails for this child to collectively teach it to be wise and not to tear us all down.
為這個(gè)孩子設(shè)置護(hù)欄、共同教它變得明智、不要把我們?nèi)繗У?這是我們的集體責(zé)任。
【340】Is that basically the model?
這基本上就是模型嗎?
【341】GB: I think it's true.
GB:我認(rèn)為這是真的。
【342】And I think it's also important to say this may shift, right?
我認(rèn)為這也很重要 說這可能會(huì)改變,對(duì)吧?
【343】We've got to take each step as we encounter it.
我們必須邁出每一步 正如我們遇到的那樣。
【344】And I think it's incredibly important today that we all do get literate in this technology, figure out how to provide the feedback, decide what we want from it.
而且我認(rèn)為,今天極其重要的是,我們所有人都要對(duì)這項(xiàng)技術(shù)有所了解,弄清楚如何提供反饋,決定我們想從中得到什么。
【345】And my hope is that that will continue to be the best path, but it's so good we're honestly having this debate because we wouldn't otherwise if it weren't out there.
我希望這將繼續(xù)是最好的路徑,但說實(shí)話,我們能進(jìn)行這場(chǎng)辯論真是太好了,因?yàn)槿绻鼪]有公開放出來,我們根本不會(huì)有這場(chǎng)辯論。
【346】CA: Greg Brockman, thank you so much for coming to TED and blowing our minds.
CA:Greg Brockman,非常感謝 感謝你來到TED,讓我們大吃一驚。