外刊聽讀 | 經(jīng)濟(jì)學(xué)人:人類與AI的關(guān)系

Bartleby
巴特爾比
Machine learnings
機(jī)器學(xué)習(xí)
How do employees and customers feel about artificial intelligence?
員工和顧客對AI的感受如何?
IF YOU ASK something of ChatGPT, an artificial-intelligence (AI) tool that is all the rage, the responses you get back are almost instantaneous, utterly certain and often wrong. It is a bit like talking to an economist. The questions raised by technologies like ChatGPT yield much more tentative answers. But they are ones that managers ought to start asking.
如果你向ChatGPT這個(gè)風(fēng)靡一時(shí)的人工智能(AI)工具詢問一些事情,你得到的回答幾乎是即時(shí)的、完全肯定的,而且往往是錯(cuò)誤的。這有點(diǎn)像與經(jīng)濟(jì)學(xué)家交談。而像ChatGPT這樣的技術(shù)本身所引發(fā)的問題,答案則要試探性得多。但這些正是管理者應(yīng)該開始提出的問題。
One issue is how to deal with employees’ concerns about job security. Worries are natural. An AI that makes it easier to process your expenses is one thing; an AI that people would prefer to sit next to at a dinner party quite another. Being clear about how workers would redirect time and energy that is freed up by an AI helps foster acceptance. So does creating a sense of agency: research conducted by MIT Sloan Management Review and the Boston Consulting Group found that an ability to override an AI makes employees more likely to use it.
一個(gè)問題是如何處理員工對工作保障的擔(dān)憂。這種擔(dān)憂是自然的。一個(gè)讓你更容易處理開支報(bào)銷的AI是一回事;一個(gè)人們在晚宴上更愿意挨著坐的AI則完全是另一回事。明確說明員工將如何重新安排被AI釋放出來的時(shí)間和精力,有助于培養(yǎng)接受度。營造一種自主掌控感也是如此:《麻省理工斯隆管理評論》和波士頓咨詢公司進(jìn)行的研究發(fā)現(xiàn),能夠推翻AI的決定會(huì)讓員工更愿意使用它。
Whether people really need to understand what is going on inside an AI is less clear. Intuitively, being able to follow an algorithm’s reasoning should trump being unable to. But a piece of research by academics at Harvard University, the Massachusetts Institute of Technology and the Polytechnic University of Milan suggests that too much explanation can be a problem.
人們是否真的需要了解AI內(nèi)部的運(yùn)作,則不那么明確。直覺上,能夠理解算法的推理過程應(yīng)該勝過無法理解。但哈佛大學(xué)、麻省理工學(xué)院和米蘭理工大學(xué)的學(xué)者進(jìn)行的一項(xiàng)研究表明,過多的解釋可能會(huì)成為問題。
Employees at Tapestry, a portfolio of luxury brands, were given access to a forecasting model that told them how to allocate stock to stores. Some used a model whose logic could be interpreted; others used a model that was more of a black box. Workers turned out to be likelier to overrule models they could understand because they were, mistakenly, sure of their own intuitions. Workers were willing to accept the decisions of a model they could not fathom, however, because of their confidence in the expertise of people who had built it. The credentials of those behind an AI matter.
奢侈品集團(tuán)泰佩思琦(Tapestry)的員工可以使用一個(gè)預(yù)測模型,它會(huì)告訴他們?nèi)绾蜗蜷T店分配庫存。一些人使用的是邏輯可以被解讀的模型;另一些人使用的模型則更像一個(gè)黑箱。事實(shí)證明,員工更有可能推翻自己能夠理解的模型,因?yàn)樗麄冨e(cuò)誤地確信自己的直覺。然而,對于無法參透的模型,員工們卻愿意接受它的決定,因?yàn)樗麄冃刨嚇?gòu)建模型者的專業(yè)能力。AI背后開發(fā)者的資歷很重要。
The different ways that people respond to humans and to algorithms are a burgeoning area of research. In a recent paper Gizem Yalcin of the University of Texas at Austin and her co-authors looked at whether consumers responded differently to decisions—to approve someone for a loan, for example, or a country-club membership—when they were made by a machine or a person. They found that people reacted the same when they were being rejected. But they felt less positively about an organisation when they were approved by an algorithm rather than a human. The reason? People are good at explaining away unfavourable decisions, whoever makes them. It is harder for them to attribute a successful application to their own charming, delightful selves when assessed by a machine. People want to feel special, not reduced to a data point.
人們對人類和算法的反應(yīng)有何不同,是一個(gè)新興的研究領(lǐng)域。在最近的一篇論文中,德克薩斯大學(xué)奧斯汀分校的Gizem Yalcin和她的合著者研究了當(dāng)某項(xiàng)決定(例如批準(zhǔn)某人的貸款或鄉(xiāng)村俱樂部會(huì)員資格)由機(jī)器而非人做出時(shí),消費(fèi)者的反應(yīng)是否有所不同。他們發(fā)現(xiàn),在被拒絕時(shí),人們的反應(yīng)是一樣的。但如果是被算法而不是人批準(zhǔn),他們對該機(jī)構(gòu)的好感會(huì)有所下降。原因何在?無論決定由誰做出,人們都善于為不利的結(jié)果找理由開脫。而在被機(jī)器評估時(shí),他們更難把申請成功歸因于自己的魅力和討人喜歡。人們想要感覺自己很特別,而不是被簡化成一個(gè)數(shù)據(jù)點(diǎn)。
In a forthcoming paper, meanwhile, Arthur Jago of the University of Washington and Glenn Carroll of the Stanford Graduate School of Business investigate how willing people are to give rather than earn credit—specifically for work that someone did not do on their own. They showed volunteers something attributed to a specific person—an artwork, say, or a business plan—and then revealed that it had been created either with the help of an algorithm or with the help of human assistants. Everyone gave less credit to producers when they were told they had been helped, but this effect was more pronounced for work that involved human assistants. Not only did the participants see the job of overseeing the algorithm as more demanding than supervising humans, but they did not feel it was as fair for someone to take credit for the work of other people.
與此同時(shí),在一篇即將發(fā)表的論文中,華盛頓大學(xué)的Arthur Jago和斯坦福大學(xué)商學(xué)院的Glenn Carroll研究了人們有多愿意給予(而非贏得)功勞,特別是對于那些并非某人獨(dú)立完成的工作。他們向志愿者展示署名為某個(gè)特定個(gè)人的成果,比如一件藝術(shù)品或一份商業(yè)計(jì)劃書,然后透露它是在算法的幫助下還是在人類助手的幫助下完成的。被告知制作者得到過幫助后,所有人給予的功勞都減少了,但在涉及人類助手的工作中,這種效應(yīng)更為明顯。參與者不僅認(rèn)為監(jiān)督算法比監(jiān)督人類要求更高,而且也覺得把其他人的勞動(dòng)成果據(jù)為己功不那么公平。
Another paper, by Anuj Kapoor of the Indian Institute of Management Ahmedabad and his co-authors, examines whether AIs or humans are more effective at helping people lose weight. The authors looked at the weight loss achieved by subscribers to an Indian mobile app, some of whom used only an AI coach and some of whom used a human coach, too. They found that people who also used a human coach lost more weight, set themselves tougher goals and were more fastidious about logging their activities. But people with a higher body mass index did not do as well with a human coach as those who weighed less. The authors speculate that heavier people might be more embarrassed by interacting with another person.
印度管理學(xué)院艾哈邁達(dá)巴德分校的Anuj Kapoor和他的合著者的另一篇論文考察了在幫助人們減肥方面,AI和人類誰更有效。作者觀察了一款印度移動(dòng)應(yīng)用的訂閱用戶的減重效果,其中一些人只使用AI教練,另一些人還使用了人類教練。他們發(fā)現(xiàn),同時(shí)使用人類教練的人減掉了更多體重,為自己設(shè)定了更嚴(yán)格的目標(biāo),記錄自己的活動(dòng)也更加一絲不茍。但身體質(zhì)量指數(shù)(BMI)較高的人在人類教練指導(dǎo)下的效果不如體重較輕的人。作者推測,體重較重的人在與他人互動(dòng)時(shí)可能會(huì)感到更難為情。
The picture that emerges from such research is messy. It is also dynamic: just as technologies evolve, so will attitudes. But it is crystal-clear on one thing. The impact of ChatGPT and other AIs will depend not just on what they can do, but also on how they make people feel.
這類研究呈現(xiàn)出的圖景是混亂的。它也是動(dòng)態(tài)的:隨著技術(shù)的發(fā)展,人們對AI的態(tài)度也會(huì)發(fā)生變化。但有一點(diǎn)是非常清楚的。ChatGPT和其他AI的影響不僅取決于它們能做什么,還取決于它們給人的感覺。














