

Foreign Periodical Close Reading 42 | The Economist: Unemployment or Liberation? The Dual Revolution of Artificial Intelligence

2023-02-11 18:07 | By 月上星辰2018

Business | Bartleby

Feb 2nd 2023 | 786 words

The relationship between AI and humans


What questions do technologies like ChatGPT raise for employees and customers?


Vocabulary note: raise for = pose for, i.e. to present (a question or problem) to someone.

If you ask something of ChatGPT, an artificial-intelligence (AI) tool that is all the rage, the responses you get back are almost instantaneous, utterly certain and often wrong. It is a bit like talking to an economist. The questions raised by technologies like ChatGPT yield much more tentative answers. But they are ones that managers ought to start asking.


One issue is how to deal with employees’ concerns about job security. Worries are natural. An AI that makes it easier to process your expenses is one thing; an AI that people would prefer to sit next to at a dinner party quite another. Being clear about how workers would redirect time and energy that is freed up by an AI helps foster acceptance. So does creating a sense of agency: research conducted by MIT Sloan Management Review and the Boston Consulting Group found that an ability to override an AI makes employees more likely to use it.

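The override finding maps onto a familiar human-in-the-loop pattern: the AI proposes a default, and the worker always retains the final say. Below is a minimal Python sketch of that idea; the expense-categorisation task and the names `ai_suggest_expense_category` and `process_expense` are hypothetical illustrations, not the system studied in the MIT Sloan/BCG research.

```python
# A minimal human-in-the-loop sketch: the AI's output is only a
# suggestion, and a human decision always wins. Hypothetical example.
from typing import Optional

def ai_suggest_expense_category(description: str) -> str:
    # Stand-in for a real classifier.
    return "travel" if "taxi" in description.lower() else "office supplies"

def process_expense(description: str, human_override: Optional[str] = None) -> str:
    # The suggestion is a default; the employee can overrule it.
    suggestion = ai_suggest_expense_category(description)
    return human_override if human_override is not None else suggestion

print(process_expense("Taxi to client meeting"))               # travel
print(process_expense("Stapler", human_override="equipment"))  # equipment
```

The design point is that the override path exists and is cheap to use, which is what the research links to higher adoption.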

Whether people really need to understand what is going on inside an AI is less clear. Intuitively, being able to follow an algorithm’s reasoning should trump being unable to. But a piece of research by academics at Harvard University, the Massachusetts Institute of Technology and the Polytechnic University of Milan suggests that too much explanation can be a problem.


Employees at Tapestry, a portfolio of luxury brands, were given access to a forecasting model that told them how to allocate stock to stores. Some used a model whose logic could be interpreted; others used a model that was more of a black box. Workers turned out to be likelier to overrule models they could understand because they were, mistakenly, sure of their own intuitions. Workers were willing to accept the decisions of a model they could not fathom, however, because of their confidence in the expertise of people who had built it. The credentials of those behind an AI matter.

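To make the interpretable-versus-black-box contrast concrete, here is an illustrative Python sketch. The article does not say which models Tapestry actually used; the synthetic data, the linear model, and the gradient-boosted ensemble below are stand-ins, chosen only because one exposes its logic and the other does not.

```python
# Two models fit the same forecasting task: one whose logic can be
# read off its coefficients, one that is effectively a black box.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # e.g. past sales, seasonality, store size
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

interpretable = LinearRegression().fit(X, y)
black_box = GradientBoostingRegressor().fit(X, y)

# A worker can inspect and second-guess these weights...
print("inspectable weights:", interpretable.coef_)
# ...but has no comparable window into the ensemble's reasoning.
print(interpretable.predict(X[:1]), black_box.predict(X[:1]))
```

The study's twist is that the transparency of the first kind of model invited overconfident overrides, while trust in the second rested on the credentials of its builders.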

The different ways that people respond to humans and to algorithms are a burgeoning area of research. In a recent paper Gizem Yalcin of the University of Texas at Austin and her co-authors looked at whether consumers responded differently to decisions—to approve someone for a loan, for example, or a country-club membership—when they were made by a machine or a person. They found that people reacted the same when they were being rejected. But they felt less positively about an organisation when they were approved by an algorithm rather than a human. The reason? People are good at explaining away unfavourable decisions, whoever makes them. It is harder for them to attribute a successful application to their own charming, delightful selves when assessed by a machine. People want to feel special, not reduced to a data point.


In a forthcoming paper, meanwhile, Arthur Jago of the University of Washington and Glenn Carroll of the Stanford Graduate School of Business investigate how willing people are to give rather than earn credit—specifically for work that someone did not do on their own. They showed volunteers something attributed to a specific person—an artwork, say, or a business plan—and then revealed that it had been created either with the help of an algorithm or with the help of human assistants. Everyone gave less credit to producers when they were told they had been helped, but this effect was more pronounced for work that involved human assistants. Not only did the participants see the job of overseeing the algorithm as more demanding than supervising humans, but they did not feel it was as fair for someone to take credit for the work of other people.


Another paper, by Anuj Kapoor of the Indian Institute of Management Ahmedabad and his co-authors, examines whether AIs or humans are more effective at helping people lose weight. The authors looked at the weight loss achieved by subscribers to an Indian mobile app, some of whom used only an AI coach and some of whom used a human coach, too. They found that people who also used a human coach lost more weight, set themselves tougher goals and were more fastidious about logging their activities. But people with a higher body mass index did not do as well with a human coach as those who weighed less. The authors speculate that heavier people might be more embarrassed by interacting with another person.

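For readers unfamiliar with the measure the study splits subscribers on: body mass index is simply weight in kilograms divided by the square of height in metres. A one-line Python sketch (the example numbers are illustrative, not from the paper):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    # BMI = kg / m^2; by the usual convention, 25-30 counts as overweight.
    return weight_kg / height_m ** 2

print(round(bmi(85.0, 1.75), 1))  # 27.8
```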

The picture that emerges from such research is messy. It is also dynamic: just as technologies evolve, so will attitudes. But it is crystal-clear on one thing. The impact of ChatGPT and other AIs will depend not just on what they can do, but also on how they make people feel.





