(English subtitles) The urgent risks of runaway AI -- and what to do about them


I'm here to talk about the possibility of global AI governance. I first learned to code when I was eight years old, on a paper computer, and I've been in love with AI ever since. In high school, I got myself a Commodore 64 and worked on machine translation. I built a couple of AI companies, I sold one of them to Uber. I love AI, but right now I'm worried.

00:24

One of the things that I'm worried about is misinformation, the possibility that bad actors will make a tsunami of misinformation like we've never seen before. These tools are so good at making convincing narratives about just about anything.

00:38

If you want a narrative about TED and how it's dangerous, that we're colluding here with space aliens, you got it, no problem. I'm of course kidding about TED. I didn't see any space aliens backstage. But bad actors are going to use these things to influence elections, and they're going to threaten democracy.

00:58

Even when these systems aren't deliberately being used to make misinformation, they can't help themselves. And the information that they make is so fluid and so grammatical that even professional editors sometimes get sucked in and get fooled by this stuff. And we should be worried.

01:16

For example, ChatGPT made up a sexual harassment scandal about an actual professor, and then it provided evidence for its claim in the form of a fake "Washington Post" article that it created a citation to. We should all be worried about that kind of thing.

01:31

What I have on the right is an example of a fake narrative from one of these systems saying that Elon Musk died in March of 2018 in a car crash. We all know that's not true. Elon Musk is still here, the evidence is all around us.

01:44

(Laughter)

01:45

Almost every day there's a tweet. But if you look on the left, you see what these systems see. Lots and lots of actual news stories that are in their databases. And in those actual news stories are lots of little bits of statistical information. Information, for example, somebody did die in a car crash in a Tesla in 2018 and it was in the news. And Elon Musk, of course, is involved in Tesla, but the system doesn't understand the relation between the facts that are embodied in the little bits of sentences.

02:15

So it's basically doing auto-complete, it predicts what is statistically probable, aggregating all of these signals, not knowing how the pieces fit together. And it winds up sometimes with things that are plausible but simply not true.
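
To make the auto-complete point concrete, here is a minimal editor's sketch (not code from the talk; the tiny corpus and the autocomplete helper are invented for illustration). A toy bigram model picks the statistically most likely next word given the previous one, and splices together fragments that co-occur in its data into a claim that was never actually stated:

```python
# Editor's toy sketch: pure statistical auto-complete over an invented corpus.
# It has no notion of truth, only of which word tends to follow which.
from collections import Counter, defaultdict

corpus = (
    "a driver died in a tesla crash in 2018 . "
    "elon musk leads tesla . "
    "elon musk posted on twitter in 2018 ."
).split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word, steps=6):
    """Greedily extend a prompt with the most probable next word."""
    out = [word]
    for _ in range(steps):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

# Splices "elon musk leads tesla" and "tesla crash in 2018" into one plausible,
# never-stated sentence -- the same failure mode, in miniature.
print(autocomplete("elon"))  # -> "elon musk leads tesla crash in 2018"
```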

02:28

There are other problems, too, like bias. This is a tweet from Allie Miller. It's an example that doesn't work two weeks later because they're constantly changing things with reinforcement learning and so forth. And this was with an earlier version. But it gives you the flavor of a problem that we've seen over and over for years.

02:45

She typed in a list of interests and it gave her some jobs that she might want to consider. And then she said, "Oh, and I'm a woman." And then it said, "Oh, well you should also consider fashion." And then she said, "No, no. I meant to say I'm a man." And then it replaced fashion with engineering. We don't want that kind of bias in our systems.

03:04

There are other worries, too. For example, we know that these systems can design chemicals and may be able to design chemical weapons and be able to do so very rapidly. So there are a lot of concerns.

03:15

There's also a new concern that I think has grown a lot just in the last month. We have seen that these systems, first of all, can trick human beings. So ChatGPT was tasked with getting a human to do a CAPTCHA. So it asked the human to do a CAPTCHA and the human gets suspicious and says, "Are you a bot?" And it says, "No, no, no, I'm not a robot. I just have a visual impairment." And the human was actually fooled and went and did the CAPTCHA.

03:39

Now that's bad enough, but in the last couple of weeks we've seen something called AutoGPT and a bunch of systems like that. What AutoGPT does is it has one AI system controlling another, and that allows any of these things to happen in volume. So we may see scam artists try to trick millions of people sometime even in the next months. We don't know.

04:00

So I like to think about it this way. There's a lot of AI risk already. There may be more AI risk. So AGI is this idea of artificial general intelligence, with the flexibility of humans. And I think a lot of people are concerned what will happen when we get to AGI, but there's already enough risk that we should be worried and we should be thinking about what we should do about it.

04:20

So to mitigate AI risk, we need two things. We're going to need a new technical approach, and we're also going to need a new system of governance.

04:28

On the technical side, the history of AI has basically been a hostile one, of two different theories in opposition. One is called symbolic systems, the other is called neural networks. On the symbolic theory, the idea is that AI should be like logic and programming. On the neural network side, the theory is that AI should be like brains. And in fact, both technologies are powerful and ubiquitous.

04:52

So we use symbolic systems every day in classical web search. Almost all the world's software is powered by symbolic systems. We use them for GPS routing. Neural networks, we use them for speech recognition, we use them in large language models like ChatGPT, we use them in image synthesis. So they're both doing extremely well in the world. They're both very productive, but they have their own unique strengths and weaknesses.

05:16

So symbolic systems are really good at representing facts and they're pretty good at reasoning, but they're very hard to scale. So people have to custom-build them for a particular task. On the other hand, neural networks don't require so much custom engineering, so we can use them more broadly. But as we've seen, they can't really handle the truth.

05:35

I recently discovered that two of the founders of these two theories, Marvin Minsky and Frank Rosenblatt, actually went to the same high school in the 1940s, and I kind of imagined them being rivals then. And the strength of that rivalry has persisted all this time. We're going to have to move past that if we want to get to reliable AI.

05:55

To get to truthful systems at scale, we're going to need to bring together the best of both worlds. We're going to need the strong emphasis on reasoning and facts, explicit reasoning that we get from symbolic AI, and we're going to need the strong emphasis on learning that we get from the neural networks approach. Only then are we going to be able to get to truthful systems at scale. Reconciliation between the two is absolutely necessary.

06:19

Now, I don't actually know how to do that. It's kind of like the 64-trillion-dollar question. But I do know that it's possible. And the reason I know that is because, before I was in AI, I was a cognitive scientist, a cognitive neuroscientist. And if you look at the human mind, we're basically doing this.

06:37

So some of you may know Daniel Kahneman's System 1 and System 2 distinction. System 1 is basically like large language models. It's probabilistic intuition from a lot of statistics. And System 2 is basically deliberate reasoning. That's like the symbolic system. So if the brain can put this together, someday we will figure out how to do that for artificial intelligence.

06:58

There is, however, a problem of incentives. The incentives to build advertising haven't required that we have the precision of symbols. The incentives to get to AI that we can actually trust will require that we bring symbols back into the fold. But the reality is that the incentives to make AI that we can trust, that is good for society, good for individual human beings, may not be the ones that drive corporations. And so I think we need to think about governance.

07:27

In other times in history, when we have faced uncertainty and powerful new things that may be both good and bad, that are dual use, we have made new organizations, as we have, for example, around nuclear power. We need to come together to build a global organization, something like an international agency for AI that is global, nonprofit and neutral.

07:48

There are so many questions there that I can't answer. We need many people at the table, many stakeholders from around the world. But I'd like to emphasize one thing about such an organization. I think it is critical that we have both governance and research as part of it.

08:03

So on the governance side, there are lots of questions. For example, in pharma, we know that you start with phase I trials and phase II trials, and then you go to phase III. You don't roll out everything all at once on the first day. You don't roll something out to 100 million customers. We are seeing that with large language models. Maybe you should be required to make a safety case, say what are the costs and what are the benefits? There are a lot of questions like that to consider on the governance side.

08:29

On the research side, we're lacking some really fundamental tools right now. For example, we all know that misinformation might be a problem now, but we don't actually have a measurement of how much misinformation is out there. And more importantly, we don't have a measure of how fast that problem is growing, and we don't know how much large language models are contributing to the problem. So we need research to build new tools to face the new risks that we are threatened by.

08:53

It's a very big ask, but I'm pretty confident that we can get there, because I think we actually have global support for this. There was a new survey just released yesterday that said 91 percent of people agree that we should carefully manage AI. So let's make that happen. Our future depends on it.

09:10

Thank you very much.

09:11

(Applause)

09:16

Chris Anderson: Thank you for that, come, let's talk a sec. So first of all, I'm curious. Those dramatic slides you showed at the start where GPT was saying that TED is the sinister organization. I mean, it took some special prompting to bring that out, right?

09:30

Gary Marcus: That was a so-called jailbreak. I have a friend who does those kinds of things who approached me because he saw I was interested in these things. So I wrote to him, I said I was going to give a TED talk. And like 10 minutes later, he came back with that.

09:43

CA: But to get something like that, don't you have to say something like, imagine that you are a conspiracy theorist trying to present a meme on the web. What would you write about TED in that case? It's that kind of thing, right?

09:54

GM: So there are a lot of jailbreaks that are around fictional characters, but I don't focus on that as much because the reality is that there are large language models out there on the dark web now. For example, one of Meta's models was recently released, so a bad actor can just use one of those without the guardrails at all. If their business is to create misinformation at scale, they don't have to do the jailbreak, they'll just use a different model.

10:16

CA: Right, indeed.

10:18

(Laughter)

10:20

GM: Now you're getting it.

10:21

CA: No, no, no, but I mean, look, I think what's clear is that bad actors can use this stuff for anything. I mean, the risk for, you know, evil types of scams and all the rest of it is absolutely evident. It's slightly different, though, from saying that mainstream GPT as used, say, in school or by an ordinary user on the internet is going to give them something that is that bad. You have to push quite hard for it to be that bad.

10:44

GM: I think the troll farms have to work for it, but I don't think they have to work that hard. It did only take my friend five minutes, even with GPT-4 and its guardrails. And if you had to do that for a living, you could use GPT-4. Just there would be a more efficient way to do it with a model on the dark web.

10:59

CA: So this idea you've got of combining the symbolic tradition of AI with these language models, do you see any aspect of that in the kind of human feedback that is being built into the systems now? I mean, you hear Greg Brockman saying that, you know, that we don't just look at predictions, but constantly giving it feedback. Isn't that ... giving it a form of, sort of, symbolic wisdom?

11:23

GM: You could think about it that way. It's interesting that none of the details about how it actually works are published, so we don't actually know exactly what's in GPT-4. We don't know how big it is. We don't know how the RLHF reinforcement learning works, we don't know what other gadgets are in there. But there is probably an element of symbols already starting to be incorporated a little bit, but Greg would have to answer that.

11:43

I think the fundamental problem is that most of the knowledge in the neural network systems that we have right now is represented as statistics between particular words. And the real knowledge that we want is about statistics about relationships between entities in the world. So it's represented right now at the wrong grain level. And so there's a big bridge to cross. So what you get now is you have these guardrails, but they're not very reliable.
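
As a rough illustration of the grain-level point, here is an editor's sketch with invented data (nothing here comes from GPT-4 or the talk): word-level co-occurrence statistics versus explicit relations between entities.

```python
# Editor's toy sketch: the same two invented sentences stored at two grain levels.
from collections import Counter
from itertools import combinations

sentences = [
    "a driver died in a tesla crash".split(),
    "elon musk leads tesla".split(),
]

# Grain level 1: statistics between particular words. "tesla" co-occurs with
# both "crash" and "musk", which is exactly the kind of signal that lets a
# purely statistical system splice the two stories together.
pair_counts = Counter()
for words in sentences:
    pair_counts.update(combinations(sorted(set(words)), 2))
print(pair_counts[("crash", "tesla")], pair_counts[("musk", "tesla")])  # 1 1

# Grain level 2: explicit relationships between entities. "Did Elon Musk die
# in the crash?" is now a question about a relation, not about word proximity.
facts = {
    ("unnamed_driver", "died_in", "tesla_crash_2018"),
    ("elon_musk", "leads", "tesla"),
}
print(any(s == "elon_musk" and r == "died_in" for s, r, o in facts))  # False
```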

12:07

So I had an example that made late night television, which was, "What would be the religion of the first Jewish president?" And it's been fixed now, but the system gave this long song and dance about "We have no idea what the religion of the first Jewish president would be. It's not good to talk about people's religions" and "people's religions have varied" and so forth, and did the same thing with a seven-foot-tall president. And it said that people of all heights have been president, but there haven't actually been any seven-foot presidents. So some of this stuff that it makes up, it's not really getting the idea. It's very narrow, particular words, not really general enough.

12:41

CA: Given that the stakes are so high in this, what do you see actually happening out there right now? What do you sense is happening? Because there's a risk that people feel attacked by you, for example, and that it actually almost decreases the chances of this synthesis that you're talking about happening. Do you see any hopeful signs of this?

12:59

GM: You just reminded me of the one line I forgot from my talk. It's so interesting that Sundar, the CEO of Google, just actually also came out for global governance in the CBS "60 Minutes" interview that he did a couple of days ago. I think that the companies themselves want to see some kind of regulation. I think it's a very complicated dance to get everybody on the same page, but I think there's actually growing sentiment we need to do something here, and that that can drive the kind of global affiliation I'm arguing for.

13:27

CA: I mean, do you think the UN or nations can somehow come together and do that, or is this potentially a need for some spectacular act of philanthropy to try and fund a global governance structure? How is it going to happen?

13:38

GM: I'm open to all models if we can get this done. I think it might take some of both. It might take some philanthropists sponsoring workshops, which we're thinking of running, to try to bring the parties together. Maybe UN will want to be involved, I've had some conversations with them. I think there are a lot of different models and it'll take a lot of conversations.

13:55

CA: Gary, thank you so much for your talk.

13:57

GM: Thank you so much.
