
Can AI Nudge Us to Make Better Choices?

The behavioral revolution in economics was triggered by a simple, haunting question: what if people don’t act rationally? This same question now vexes the technology field.
In the online world, once expected to be a place of ready information and easy collaboration, lies and hate can spread faster than truth and kindness. Corporate systems, too, elicit irrational behavior. For example, when predicting sales, employees often hide bad deals and selectively report the good ones.
AI stands at the crossroads of the behavioral question, with the potential to make matters worse or to elicit better outcomes from us. The key to better outcomes is to boost AI’s emotional quotient — its EQ. How? By training algorithms to mimic the way people behave in constructive relationships.
Whether or not we care to admit it, we build relationships with apps. And apps, like people, can elicit both positive and negative behaviors from us. When people with high EQ interact with us, they learn our patterns, empathize with our motivations, and carefully weigh their responses. They decide to ignore, challenge, or encourage us depending on how they anticipate we will react.
AI can be trained to do the same thing. Why? Because behaviors are more predictable than we like to think. The $70 billion weight-loss industry thrives because diet companies know that most people regain lost weight. The $40 billion casino industry profits from gamblers’ illogical hope of a comeback. Credit card companies know it is hard for people to break their spending habits.
While it’s still quite early, the fields of behavioral science and machine learning already provide some promising techniques for creating higher-EQ AI that organizations are putting to work to produce better outcomes. Those techniques include:
Noting pattern breaks and nudging. People who know you can easily tell when you are breaking a pattern and react accordingly. For example, a friend may notice that you suddenly changed your routine and ask you why. The Bank of America online bill paying system similarly notes pattern breaks to prevent user keying errors. The system remembers the pattern of payments you’ve made in the past and posts an alert if you substantially increase your payment to a vendor.
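The pattern-break idea can be sketched in a few lines. This is a minimal illustration under assumed thresholds, not Bank of America's actual system: it flags a new payment that sits far outside the payer's history using a simple z-score.

```python
from statistics import mean, stdev

def pattern_break_alert(past_payments, new_payment, threshold=3.0):
    """Flag a payment that deviates sharply from the payer's history.

    Returns True when the new amount is more than `threshold` standard
    deviations above the historical mean -- a crude stand-in for the
    richer pattern model a bank's bill-pay system might use.
    """
    mu = mean(past_payments)
    sigma = stdev(past_payments)
    if sigma == 0:
        # No variation in history: any different amount breaks the pattern.
        return new_payment != mu
    z = (new_payment - mu) / sigma
    return z > threshold

# A user who normally pays about $120 suddenly keys $1,200.
history = [118, 122, 120, 119, 121, 120]
print(pattern_break_alert(history, 1200))  # keying error? -> nudge the user
```

A production system would model payee, cadence, and seasonality rather than a single z-score, but the nudge logic is the same: detect the break, then ask rather than block.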
Encouraging self-awareness with benchmarks. Bluntly telling individuals they are performing poorly often backfires, provoking defensiveness rather than greater effort. A more diplomatic method simply allows people to see how they compare with others. For instance, a major technology firm used AI to generate more accurate sales forecasts than the sales team did. To induce the team to course-correct, the system provides each team member with personalized visualizations showing how their forecasts differ from the AI forecast. A simple nudge then inquires why this might be the case. The team member can provide a rational explanation, avoid providing feedback, or claim that the AI is incorrect. The AI learns about the substance and timing of the individual’s reaction, weighs it against the gap in the two forecasts, and can choose an appropriate second-order nudge.
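A minimal sketch of such a benchmark nudge, with illustrative names and thresholds (the article does not describe the firm's actual logic): the system stays silent when the two forecasts roughly agree and asks an open-ended question when they diverge.

```python
def benchmark_nudge(member_forecast, ai_forecast, tolerance=0.15):
    """Compare a rep's forecast with the AI forecast and, when the gap
    is large, return a gentle question rather than a blunt correction.

    `tolerance` is the relative gap below which no nudge is sent;
    the 15% default is an assumption for illustration.
    """
    gap = (member_forecast - ai_forecast) / ai_forecast
    if abs(gap) <= tolerance:
        return None  # forecasts roughly agree; stay silent
    direction = "above" if gap > 0 else "below"
    return (f"Your forecast is {abs(gap):.0%} {direction} the model's. "
            "What might explain the difference?")

print(benchmark_nudge(130_000, 100_000))  # large gap -> open question
print(benchmark_nudge(103_000, 100_000))  # small gap -> None, no nudge
```

The diplomatic framing is the point: the output is a question about the gap, never a verdict about the person.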
Using game theory to accept or challenge conclusions. Imagine being on a team that must find errors in over 100,000 mutual fund transactions a day. A fund managing a trillion dollars in assets is tackling this daunting problem with AI. The first version of the AI scored potential errors (called “anomalies”) by risk and potential cost, then queued the riskiest anomalies first. The system then tracked the time the analyst spent on each anomaly. It was assumed that analysts would spend more time on the riskier anomalies and less time on the “no-brainers.” In fact, some analysts were flying through the riskiest anomalies, reaching suspiciously fast conclusions.
In most massive screening systems, the rate of false positives is extremely high. For example, secret teams from the Department of Homeland Security found that TSA screeners failed to stop 95% of covert attempts to smuggle weapons or explosive materials through screening. Mutual fund analysts scouring countless transactions, like TSA screeners processing thousands of passengers, find their eyes glazing over as they glide past anomalies.
The fund is tackling this dangerous, though highly predictable, behavior with an algorithm employed by chess playing programs. This modified version of sequential game theory first monitors whether the analyst concludes that an anomaly is a false positive or decides to spend more time on it. The AI, playing the role of a chess opponent, can decide to counter by accepting the analyst’s decision or challenging it.
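One move of such a sequential accept-or-challenge policy might look like the sketch below. The rule and its thresholds are assumptions for illustration, not the fund's algorithm: the analyst moves first (dismiss the anomaly or keep digging), and the AI counters by challenging a dismissal that arrived suspiciously fast relative to the anomaly's risk.

```python
def respond_to_analyst(risk_score, seconds_spent, dismissed,
                       expected_seconds_per_risk=10.0):
    """AI counter-move in a simplified sequential game.

    `risk_score` is the anomaly's risk (higher = riskier);
    `dismissed` is True when the analyst marked it a false positive.
    The expected review time scales with risk; a dismissal in under
    half that time draws a challenge. All constants are illustrative.
    """
    expected_time = risk_score * expected_seconds_per_risk
    if dismissed and seconds_spent < 0.5 * expected_time:
        return "challenge"  # too fast for this much risk -> push back
    return "accept"

# A high-risk anomaly dismissed in 4 seconds: the AI pushes back.
print(respond_to_analyst(risk_score=9, seconds_spent=4, dismissed=True))
```

A fuller implementation would learn each analyst's baseline speed and update after every exchange, the way a chess engine refines its model of an opponent.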
Choosing the right time for insight and action. By any standard, Jeff Bezos is a master decision maker. In a recent interview with Bloomberg TV’s David Rubenstein, he described his framework for making decisions. When approached about a complex decision late in the afternoon he often replies, “That doesn’t sound like a 4 o’clock decision; that sounds like a 9 o’clock [in the morning] decision.”
My firm’s sales team A/B tested the right time of day to maximize responses to prospecting emails and found a dramatic difference in response rates between messages sent Tuesday morning and Friday afternoon. Many consumer messaging systems are tuned to maximize yield. The tuning algorithm can be enhanced to determine the type of decision to be made and the tendency of users to respond and make better choices. For example, decisions that need more thought could be presented at a time when the decision maker has more time to think — either through prediction or by the user’s scheduling.
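The core of such an A/B timing test is just a reply-rate comparison across send times. The figures below are invented for illustration; the article reports only that Tuesday morning beat Friday afternoon dramatically.

```python
def response_rate(sent, replied):
    """Fraction of sent emails that drew a reply."""
    return replied / sent

def better_send_time(results):
    """Pick the send-time label with the highest reply rate.

    `results` maps a send-time label to (emails_sent, replies).
    """
    return max(results, key=lambda label: response_rate(*results[label]))

results = {
    "Tue 9am": (500, 60),  # 12% reply rate (made-up figures)
    "Fri 4pm": (500, 15),  #  3% reply rate (made-up figures)
}
print(better_send_time(results))  # -> Tue 9am
```

A tuned system would go further, as the paragraph suggests: classify the decision's weight first, then schedule heavy decisions for times when the recipient can actually think.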
Could higher-EQ AI help bring more civility to the internet? Social media companies might do well to consider a distinction Western business people soon learn when negotiating with their Japanese counterparts — “honne” (what one feels inside) versus “tatemae” (what one publicly expresses). A shared understanding of the distinction between what one feels and what one is expected to say leads to fewer miscalculations. An algorithm based on that distinction might conceivably be developed to address the predictable tendencies of people to say and do things under the influence of crowds (even if virtual ones) that they would otherwise hesitate to do. Someone preparing an inflammatory, misleading, or cruel post might be nudged to reconsider their language or to notice the mob-like tenor of a “trending” topic. The challenges of developing such emotionally charged, high-EQ AI are daunting, but instead of simply weeding out individual posts it might ultimately be more beneficial to change online behavior for the better.
Bob Suh is the founder and CEO of OnCorps, a machine learning company focused on improving decision science. Before founding OnCorps, he was chief technology strategist at Accenture and chief strategy officer for the firm’s global technology business.
Translated by 阿丫丫 | Proofread by 周強