ChatGPT 教英語 Generative AI: what is it good for?
以下由AI自動生成
source: https://www.youtube.com/watch?v=gCDacaohqaA
Section 1: Important Words
1. Generative - 生成式的;能夠自動產(chǎn)生(文字、圖像等)內(nèi)容的。Example sentence: Generative AI has become a popular topic in the tech industry.
2. Coherent - 連貫的,一致的。Example sentence: The AI's output was coherent and easy to understand.
3. Churn (through) - 翻攪;快速處理大量(資料)。Example sentence: The language model was able to churn through a mountain of unlabelled data.
4. Baffled - 迷惑的。Example sentence: Many people are still baffled by the technology behind large language models.
5. Pass - 通過(考試、測試)。Example sentence: The AI was able to pass the US Medical Licensing exam.
6. Code - 代碼,程序。Example sentence: The AI is currently being used to write code.
7. Transparency - 透明度,清晰度。Example sentence: There is a lack of transparency when it comes to understanding how the AI works.
8. Accuracy - 準(zhǔn)確度,精度。Example sentence: Accuracy matters when it comes to using AI in important tasks.
9. Reliability - 可靠性。Example sentence: The reliability of the models needs to be improved before they can be widely used.
10. Automate - (使)自動化。Example sentence: Generative AI has the potential to automate many processes.
Section 2: Important Grammar Points
1. Passive voice - used to describe actions that are done to the subject of the sentence, rather than the subject performing the action. Example sentence: The US Medical Licensing exam was passed by the AI.
2. Comparative adjectives - used to compare the qualities of two or more things. Example sentence: GPT-3.5 was much more capable than its predecessor.
3. Conditional statements - used to describe actions or events that are dependent on a certain condition being met. Example sentence: If the code is slightly wrong, the interpreter or compiler chokes on it.
Section 3: Full Translation
生成式人工智能是新一輪在線工具背後的技術(shù),這些工具正被全球數(shù)百萬人使用。其中一些可以用對話式的語言回答涉及大量話題的提問,另一些則可以根據(jù)簡短的文本提示生成逼真的照片。隨著這項(xiàng)技術(shù)越來越廣泛地部署,它目前的優(yōu)勢和劣勢是什麼?《經(jīng)濟(jì)學(xué)人》的科學(xué)編輯Alok Jha與副主編Tom Standage、科學(xué)記者Abby Bertics以及全球商業(yè)與經(jīng)濟(jì)記者Arjun Ramani共同討論了人工智能的這個新時代。

2017年發(fā)生的事情是,谷歌的一些研究人員提出了一種更好的注意力機(jī)制,稱為Transformer,這就是GPT中「T」的含義。這實(shí)際上使這些系統(tǒng)變得好很多:它們可以生成更長的連貫輸出,無論是文本、計算機(jī)代碼還是其他任何東西。所以這是一次技術(shù)突破,它花了一段時間才在整個社區(qū)中傳播開來。這是發(fā)生變化的事情之一。另一個變化是,這項(xiàng)技術(shù)變得更加可見了。去年,一個能力強(qiáng)得多的模型GPT-3.5以ChatGPT的形式推出,任何人都可以注冊使用。一旦排到等待名單的前面,你就可以去和它對話。你應(yīng)該聽過這些數(shù)字:在頭兩個月內(nèi)就有1億人試用了它,這被認(rèn)為是歷史上消費(fèi)級技術(shù)最快的普及速度。因此,真正改變的是,突然之間任何人都有辦法使用這項(xiàng)技術(shù)。人們?yōu)樗氤隽烁鞣N令人驚嘆的用途,讓它做各種非同尋常的事情。這才真正讓它聲名大噪。

我認(rèn)為這些大型語言模型的巨大優(yōu)勢之一,是它們能夠消化和處理海量的未標(biāo)記數(shù)據(jù)。過去的人工智能總是需要樣本和對應(yīng)的標(biāo)籤,這就需要人工逐一去標(biāo)記。但對大型語言模型來說,你得到的基本上就像是對數(shù)億個單詞拍下的一張模糊照片,而它的效果竟然出奇地好。我認(rèn)為很多人至今仍感到困惑。它能生成令人信服的文本,非常擅長模式匹配。風(fēng)格轉(zhuǎn)換是它擅長的另一件事:你可以讓它以一個14世紀(jì)海盜的風(fēng)格、帶著來自巴哈馬的愛爾蘭口音寫一封情書。它也相當(dāng)擅長通過標(biāo)準(zhǔn)化考試:它通過了美國醫(yī)師執(zhí)照考試,也通過了一些法律測試?;旧?它非常擅長與文本相關(guān)的事情。目前,其中一個重要機(jī)會是編寫代碼。用這些系統(tǒng)寫代碼的好處在於(我自己仍然會寫一些代碼),如果代碼稍有錯誤,你馬上就會發(fā)現(xiàn):要麼解釋器或編譯器直接報錯,要麼代碼的輸出和你預(yù)期的不太一樣。因此你擁有一個非常緊密的反饋迴路,只要稍有錯誤,很快就能發(fā)現(xiàn)。

在劣勢方面,我認(rèn)為其中一個問題是缺乏透明度,它有點(diǎn)像一個黑盒子。你可以看到內(nèi)部發(fā)生了什麼,比如那些注意力權(quán)重,但它們對我們來說意義不大。這些權(quán)重有超過1000億個,我們很難理解它們在做什麼。我認(rèn)為主要的弱點(diǎn)是,它是一個如此複雜的系統(tǒng),我們並不完全了解它。如果你的工作是找出新的事實(shí),那並不是這些系統(tǒng)擅長做的事。前幾天我和英國政府外交部的人交談。我說,無論我們是在政府、情報機(jī)構(gòu)還是新聞界工作,我們的工作都是找出新的事實(shí),而且這些事實(shí)必須是正確的。你不會想直接採信這些系統(tǒng)輸出的隨便什麼內(nèi)容。如果準(zhǔn)確性很重要,那麼這些系統(tǒng)可能就沒那麼好用。在它們開始自動化大量流程之前,這些模型的可靠性需要先得到改善。

將有大量的經(jīng)濟(jì)活動受到這項(xiàng)技術(shù)的影響。OpenAI的一些經(jīng)濟(jì)學(xué)家發(fā)表的一篇論文稱,未來幾年,大約20%的美國勞動力可能有大約50%的工作任務(wù)受到生成式人工智能的影響。我們?nèi)粘W龅脑S多任務(wù)都可以借助這些模型。創(chuàng)新經(jīng)濟(jì)學(xué)領(lǐng)域有一些研究談到,如果你想在某個領(lǐng)域?qū)崿F(xiàn)智能爆炸,或者呈指數(shù)增長的經(jīng)濟(jì)增長,你需要把整個流程都自動化。如果你只自動化了其中的90%,甚至99%,效果也遠(yuǎn)遠(yuǎn)達(dá)不到同樣的程度。流程中最慢的環(huán)節(jié),也就是人類,會成為「限速步驟」,最終拖慢整件事的進(jìn)展。在我看來,這就是可能發(fā)生的情況。我們用人工智能來輔助研究(我們已經(jīng)在這樣做了),但它仍然無法完成全部100%的工作。所以歸根結(jié)底,進(jìn)展的速度會和以前一樣。要不是有礙事的人類擋路,它們早就變成超級智能,把一切都變成回形針了。謝謝。
Section 4: Transcript (with timestamps)
0:00:00 Generative AI is the technology behind the wave of new online tools used by millions around the world.
0:00:08 Some can answer queries on a huge range of topics in conversational language.
0:00:15 Others can generate realistic looking photographs from short text prompts.
0:00:21 As the technology is ever more widely deployed, what are its current strengths and its weaknesses?
0:00:29 The Economist's science editor Alok Jha is joined by deputy editor Tom Standage, science correspondent Abby Bertics and global business and economics correspondent Arjun Ramani to discuss this new era of AI.
0:00:59 What happened in 2017 was that some researchers at Google came up with a better attention mechanism called the transformer.
0:01:04 And that's what the T in GPT stands for. And so, essentially, that made these systems a lot better.
0:01:12 They could come up with longer pieces of coherent output, whether that's text or computer code or whatever.
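The attention mechanism mentioned here can be sketched in a few lines. This is only a toy illustration with made-up sizes and random vectors, not the actual GPT computation: each word scores every other word, the scores are normalised into weights, and each word's output is a weighted blend of the others.

# Toy self-attention sketch (illustrative assumptions: 3 words, 4-dimensional vectors)
import numpy as np

def attention(queries, keys, values):
    # score every word against every other word, scaled by the vector size
    scores = queries @ keys.T / np.sqrt(keys.shape[1])
    # softmax turns the scores into attention weights that sum to 1 for each word
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    # each output is a weighted blend of the value vectors
    return weights @ values

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))   # three "words", each represented by a 4-number vector
print(attention(x, x, x))     # self-attention: the sequence attends to itself

A transformer stacks many layers of this, each with its own learned parameters, which is where the billions of weights discussed later come from.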
0:01:18 So there was a technical breakthrough, and that took a while to ripple through the community.
0:01:24 So that's one of the things that changed. But the other thing that changed is this technology became much more visible.
0:01:29 What happened last year is that a much more capable model, GPT-3.5, was launched as ChatGPT, which anyone could sign up for.
0:01:39 Once you got to the front of the waiting list, you could go and talk to it.
0:01:42 And you'll have heard these numbers: 100 million people tried it within the first two months.
0:01:47 And that's reckoned to be the fastest adoption of a consumer technology in history.
0:01:51 So the thing that really changed is that suddenly there was a way that anyone could use this technology.
0:01:57 And they came up with all sorts of amazing uses for it and asked it to do all sorts of extraordinary things.
0:02:02 And that was what really put it on the map.
0:02:03 I think one of the huge strengths of these large language models is that they're able to crunch and churn through such scads of unlabelled data.
0:02:13 So in the past with AI, you always needed your thing and a label.
0:02:18 So that required humans to go through and label them.
0:02:22 But with large language models, you just get what is basically a blurry picture taken of hundreds of millions of words.
0:02:30 And it honestly just seems to do well. I think a lot of people are still baffled.
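The contrast between labelled data and raw text can be made concrete with a minimal sketch. The sentiment example and the training pairs below are purely illustrative: supervised learning needs a human-supplied label for every example, while a language model turns raw text into its own training targets by predicting the next word.

# Illustrative comparison: human-labelled data vs self-supervised next-word pairs
labelled_examples = [
    ("this film was wonderful", "positive"),   # a human had to supply "positive"
    ("this film was dreadful", "negative"),    # and "negative"
]

raw_text = "generative ai is the technology behind the wave of new online tools"
words = raw_text.split()

# the "label" for each position is simply the next word in the text itself
next_word_pairs = [(words[:i], words[i]) for i in range(1, len(words))]

for context, target in next_word_pairs[:3]:
    print(" ".join(context), "->", target)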
0:02:36 It generates convincing text. It's very good at pattern matching. Style transfer is one of the other things.
0:02:44 You ask it to write a love letter in the style of a pirate from the 14th century with an Irish accent from the Bahamas.
0:02:54 It's also pretty good at passing standardised tests. It passed the US Medical Licensing exam. It's passed some legal tests.
0:03:04 Basically very good at text things.
0:03:07 At the moment, one of the big opportunities is writing code.
0:03:11 The great thing about writing code with these systems, and I do still write some code, is that if the code is slightly wrong, you find out straight away.
0:03:20 Because either the interpreter or the compiler chokes on it, or the output of the code isn't quite what you were expecting.
0:03:26 So you have this very tight feedback loop. And if it's slightly wrong, you find out pretty quickly.
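A hypothetical example of that tight feedback loop (the snippet is invented for illustration, not output from any particular model): a small slip in a drafted function is caught straight away, either because the result looks wrong or because the interpreter chokes on it.

# Suppose a model drafted this average() function with an off-by-one mistake
def average(numbers):
    return sum(numbers) / (len(numbers) - 1)   # bug: should divide by len(numbers)

print(average([2, 4, 6]))   # expected 4.0, prints 6.0, so the wrong output shows up immediately
print(average([5]))         # raises ZeroDivisionError, so the interpreter chokes on it straight away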
0:03:32 In terms of weaknesses, I think one of them is the lack of transparency. It's kind of a black box.
0:03:38 You can have access to what's going on inside, the attention weights, what those are, but they don't mean a lot to us.
0:03:46 There are over 100 billion of these weights. It's very hard for us to understand what they're doing.
0:03:54 I think the main weakness is that it's such a complex system. We don't understand it fully.
0:04:00 If your job is to find out new facts, that's not something these systems are in a good position to do.
0:04:06 I was talking to people in the British government's foreign office the other day.
0:04:12 I was saying, our job, whether we work in government, the intelligence services or journalism, is to find new facts.
0:04:18 And they've got to be right. You don't want to just take any old stuff coming out of one of these systems.
0:04:25 If accuracy matters, then these systems are maybe not so great.
0:04:29 The reliability of the models needs to be improved before they start automating huge numbers of processes.
0:04:37 There's a huge amount of economic activity that will get affected by this.
0:04:43 The paper put out by some economists at OpenAI said around 20% of the US workforce could have around 50% of their tasks affected by generative AI in the next few years.
0:04:55 A lot of tasks we do on a day-to-day basis could be helped by these models.
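As a rough back-of-the-envelope reading of that estimate (an editorial illustration, assuming the affected workers account for a roughly proportional share of all tasks; this derived figure is not from the paper itself):

# 20% of workers with ~50% of their tasks exposed is on the order of 10% of all tasks
workforce_share = 0.20
tasks_share = 0.50
print(f"{workforce_share * tasks_share:.0%} of all tasks")   # prints "10% of all tasks"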
0:05:01 There's some research in the economics of innovation that talks about how if you want to get an intelligence explosion,
0:05:09 or exponentially increasing economic growth in any given domain, you need to automate the entire process.
0:05:18 If you've only automated 90% of it, or 99%, that doesn't get you nearly the same effect.
0:05:26 The slowest part of the process, which is the human, acts as a rate-determining step.
0:05:33 We'll end up slowing things down. That's what is likely to happen in my mind.
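The rate-determining-step argument can be made concrete with Amdahl's-law style arithmetic. The 1000x speed-up assumed for the automated portion is an illustrative number, not something from the discussion; the point is that the remaining human step caps the overall gain.

# Overall speed-up when a fraction of the process is automated and the rest stays human-paced
def overall_speedup(automated_fraction, automated_speedup=1000):
    remaining = (1 - automated_fraction) + automated_fraction / automated_speedup
    return 1 / remaining

for fraction in (0.90, 0.99, 1.00):
    print(f"automate {fraction:.0%} of the process -> ~{overall_speedup(fraction):.0f}x faster overall")
# automating 90% gives only ~10x, 99% gives ~91x, and only full automation reaches ~1000x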
0:05:40 We use AI to help us with research, which we are already doing, but it still is not able to get 100% of the way.
0:05:49 So ultimately the pace of progress continues as it has been.
0:06:10 They would have become superintelligent and turned everything into paperclips if it weren't for pesky humans getting in the way.
0:06:10 Thank you.