【龍騰網】Artificial intelligence: sometimes they may hallucinate

2019-01-29 09:15 By 龍騰洞觀




The passenger noticed the stop sign, but his car kept accelerating, and panic surged through him. Seeing the train hurtling towards him, he called out to the driver in the front seat, then remembered that the car had no driver. The train slammed into the driverless car at 120 miles per hour, destroying it and killing him instantly.


This scenario is fictional, but it reflects a real flaw in today's AI systems. In recent years, machines have increasingly been shown to "hallucinate" sights and sounds: when their recognition systems are disturbed by carefully crafted "noise", they misperceive what is in front of them. In the worst case, such illusions could be as dangerous as the scenario above, where a stop sign that is obvious to any human goes unrecognised by the machine.
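The article does not describe how such "noise" is crafted, but one well-known recipe from the research literature is to nudge every pixel in the direction that most increases the classifier's error. Here is a minimal sketch in PyTorch, assuming a hypothetical pretrained classifier `model` and a labelled image tensor; it is an illustration of the general idea, not the specific attacks discussed in the article.

```python
# A minimal sketch of a "fast gradient sign" style perturbation; `model`,
# `image` and `true_label` are illustrative placeholders, not from the article.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Nudge each pixel slightly in the direction that most increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # The change is nearly invisible to a human but can flip the network's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```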





More recently, Athalye and his colleagues turned their attention to physical objects. By slightly tweaking the texture and colouring of these, the team could fool the AI into thinking they were something else. In one case a baseball was misclassified as an espresso, and in another a 3D-printed turtle was mistaken for a rifle. They were able to produce some 200 other examples of 3D-printed objects that tricked the computer in similar ways. As we begin to put robots in our homes, autonomous drones in our skies and self-driving vehicles on our streets, this starts to throw up some worrying possibilities.


“At first this started off as a curiosity,” says Athalye. “Now, however, people are looking at it as a potential security issue as these systems are increasingly being deployed in the real world.”




To Carlini, such adversarial examples “conclusively prove that machine learning has not yet reached human ability even on very simple tasks”.


Under the skin


Neural networks are loosely based on how the brain processes visual information and learns from it. Imagine a young child learning what a cat is: as they encounter more and more of these creatures, they will start noticing patterns – that this blob called a cat has four legs, soft fur, two pointy ears, almond shaped eyes and a long fluffy tail. Inside the child’s visual cortex (the section of the brain that processes visual information), there are successive layers of neurons that fire in response to visual details, such as horizontal and vertical lines, enabling the child to construct a neural ‘picture’ of the world and learn from it.


Neural networks work in a similar way. Data flows through successive layers of artificial neurons until, after being trained on hundreds or thousands of examples of the same thing (usually labelled by a human), the network starts to spot patterns that enable it to predict what it is viewing. The most sophisticated of these systems employ 'deep learning', which means they possess more of these layers.
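As a rough illustration of that description, here is a hedged sketch of a small deep classifier and one training step in PyTorch; the layer sizes, label count and names are illustrative assumptions, not details taken from the article.

```python
# A toy deep classifier: successive layers of artificial neurons whose weights
# are adjusted from human-labelled examples (all sizes are illustrative).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),   # early layers pick up simple patterns
    nn.Linear(256, 128), nn.ReLU(),       # deeper layers combine them
    nn.Linear(128, 10),                   # scores for 10 possible labels
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(images, labels):
    """One update: compare predictions with human-provided labels, adjust the weights."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```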




“Definitely it is a step in the right direction,” says Madry. While this approach does seem to make frameworks more robust, it probably has limits as there are numerous ways you could tweak the appearance of an image or object to generate confusion.
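The defence Madry is referring to is not spelled out here; one widely studied approach in this line of work is adversarial training, in which the perturbed images themselves are fed back into training. A hedged sketch follows, reusing the hypothetical `model`, `loss_fn`, `optimizer` and `fgsm_perturb` from the earlier sketches.

```python
# A hedged sketch of adversarial training (a common robustness recipe; the
# article does not specify which defence is meant). Reuses the hypothetical
# `model`, `loss_fn`, `optimizer` and `fgsm_perturb` from the sketches above.
def adversarial_train_step(images, labels):
    # Craft perturbed copies of the batch and train on clean and noisy inputs alike,
    # so the network learns to ignore this kind of adversarial tweak.
    adv_images = fgsm_perturb(model, images, labels, epsilon=0.03)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels) + loss_fn(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```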


A truly robust image classifier would replicate what ‘similarity’ means to a human: it would understand that a child’s doodle of a cat represents the same thing as a photo of a cat and a real-life moving cat. Impressive as deep learning neural networks are, they are still no match for the human brain when it comes to classifying objects, making sense of their environment or dealing with the unexpected.


If we want to develop truly intelligent machines that can function in real world scenarios, perhaps we should go back to the human brain to better understand how it solves these issues.


Binding problem




In their desire to keep things simple, engineers building artificial neural frameworks have ignored several properties of real neurons – the importance of which is only beginning to become clear. Neurons communicate by sending action potentials or ‘spikes’ down the length of their bodies, which creates a time delay in their transmission. There’s also variability between individual neurons in the rate at which they transmit information – some are quick, some slow. Many neurons seem to pay close attention to the timing of the impulses they receive when deciding whether to fire themselves.
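To make the contrast concrete, here is a toy "leaky integrate-and-fire" neuron, a standard textbook model of the spiking behaviour described above; the constants and input values are illustrative assumptions, not figures from the article.

```python
# A toy leaky integrate-and-fire neuron: input is integrated over time and a
# "spike" is emitted whenever a threshold is crossed (all constants illustrative).
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    """Return the times at which the neuron fires for a given input sequence."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # The membrane potential leaks back toward rest and is pushed up by input.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:              # threshold crossed: fire a spike...
            spike_times.append(step * dt)
            v = v_rest                 # ...then reset and keep integrating
    return spike_times

# A weaker input makes the same neuron fire later and less often: timing carries information.
print(simulate_lif(np.full(200, 1.5)))
print(simulate_lif(np.full(200, 1.2)))
```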


“Artificial neural networks have this property that all neurons are exactly the same, but the variety of morphologically different neurons in the brain suggests to me that this is not irrelevant,” says Jeffrey Bowers, a neuroscientist at the University of Bristol who is investigating which aspects of brain function aren’t being captured by current neural networks.




“Our hypothesis is that the feature binding representations present in the visual brain, and replicated in our biological spiking neural networks, may play an important role in contributing to the robustness of biological vision, including the recognition of objects, faces and human behaviours,” says Stringer.

Stringer’s team is now seeking evidence for the existence of such neurons in real human brains. They are also developing ‘hybrid’ neural networks that incorporate this new information to see if they produce a more robust form of machine learning.


“Whether this is what happens in the real brain is unclear at this point, but it is certainly intriguing, and highlights some interesting possibilities,” says Bowers.




“It is becoming ever clearer that the way the brain works is quite different to how our existing deep learning models work,” he says. “So, this indeed might end up being a completely different path to achieving success. It is hard to say how viable it is and what the timeframe needed to achieve success here is.”

In the meantime, we may need to avoid placing too much trust in the AI-powered robots, cars and programmes that we will be increasingly exposed to. You just never know if it might be hallucinating.

