The Economist | Machine learnings (ChatGPT)


Machine learnings (ChatGPT)
What questions do technologies like ChatGPT raise for employees and customers?
Feb 2nd 2023
If you ask something of ChatGPT, an artificial-intelligence (AI) tool that is all the rage, the responses you get back are almost instantaneous, utterly certain and often wrong. It is a bit like talking to an economist. The questions raised by technologies like ChatGPT yield much more tentative answers. But they are ones that managers ought to start asking.
One issue is how to deal with employees’ concerns about job security. Worries are natural. An AI that makes it easier to process your expenses is one thing; an AI that people would prefer to sit next to at a dinner party quite another. Being clear about how workers would redirect time and energy that is freed up by an AI helps foster acceptance. So does creating a sense of agency: research conducted by MIT Sloan Management Review and the Boston Consulting Group found that an ability to override an AI makes employees more likely to use it.
Whether people really need to understand what is going on inside an AI is less clear. Intuitively, being able to follow an algorithm’s reasoning should trump being unable to. But a piece of research by academics at Harvard University, the Massachusetts Institute of Technology and the Polytechnic University of Milan suggests that too much explanation can be a problem.
Employees at Tapestry, a portfolio of luxury brands, were given access to a forecasting model that told them how to allocate stock to stores. Some used a model whose logic could be interpreted; others used a model that was more of a black box. Workers turned out to be likelier to overrule models they could understand because they were, mistakenly, sure of their own intuitions. Workers were willing to accept the decisions of a model they could not fathom, however, because of their confidence in the expertise of people who had built it. The credentials of those behind an AI matter.
How people respond differently to humans and to algorithms is a burgeoning area of research. In a recent paper Gizem Yalcin of the University of Texas at Austin and her co-authors looked at whether consumers responded differently to decisions—to approve someone for a loan, for example, or a country-club membership—when they were made by a machine or a person. They found that people reacted the same when they were being rejected. But they felt less positively about an organisation when they were approved by an algorithm rather than a human. The reason? People are good at explaining away unfavourable decisions, whoever makes them. It is harder for them to attribute a successful application to their own charming, delightful selves when assessed by a machine. People want to feel special, not reduced to a data point.
In a forthcoming paper, meanwhile, Arthur Jago of the University of Washington and Glenn Carroll of the Stanford Graduate School of Business investigate how willing people are to give rather than earn credit—specifically for work that someone did not do on their own. They showed volunteers something attributed to a specific person—an artwork, say, or a business plan—and then revealed that it had been created either with the help of an algorithm or with the help of human assistants. Everyone gave less credit to producers when they were told they had been helped, but this effect was more pronounced for work that involved human assistants. Not only did the participants see the job of overseeing the algorithm as more demanding than supervising humans, but they did not feel it was as fair for someone to take credit for the work of other people.
Another paper, by Anuj Kapoor of the Indian Institute of Management Ahmedabad and his co-authors, examines whether AIs or humans are more effective at helping people lose weight. The authors looked at the weight loss achieved by subscribers to an Indian mobile app, some of whom used only an AI coach and some of whom used a human coach, too. They found that people who also used a human coach lost more weight, set themselves tougher goals and were more fastidious about logging their activities. But people with a higher body-mass index did not do as well with a human coach as those who weighed less. The authors speculate that heavier people might be more embarrassed by interacting with another person.
The picture that emerges from such research is messy. It is also dynamic: just as technologies evolve, so will attitudes. But it is crystal-clear on one thing. The impact of ChatGPT and other AIs will depend not just on what they can do, but also on how they make people feel.