
Why we get conned, and how to avoid it

2023-07-22 08:05 · Author: AyoSeki


Transcript

Kim Mills: From headline-grabbing Ponzi schemes to email phishing identity thieves to fake news stories circulating on social media, it can sometimes seem like the world is full of people who want to deceive us. What many of these scammers and con artists have in common is that they take advantage of the patterns of thinking and the mental shortcuts that we all use in our daily lives. Most of the time, these mental habits serve us well. After all, most people aren’t trying to trick us, and it would be difficult to go through life skeptical of everything and everyone. But the same habits of trust and belief that work well most of the time can leave us vulnerable when we encounter a con artist, a false news story, or some other type of deceit.


What are the cognitive habits that put us at risk of believing lies and falling prey to scams? Are some people simply more gullible than others or can anyone be conned? When you’re offered something that sounds like an amazing opportunity, what questions should you ask to figure out if it’s too good to be true? How common is fraud in the worlds of business, science and elsewhere? Is technology, including artificial intelligence, making it easier for people to cheat and lie and to get away with it? What can we do to protect ourselves and our wallets by spotting scammers before it’s too late?


Welcome to Speaking of Psychology, the flagship podcast of the American Psychological Association that examines the links between psychological science and everyday life. I’m Kim Mills.


I have two guests today, co-authors of the new book, Nobody’s Fool: Why We Get Taken In and What We Can Do About It. First is Dr. Daniel Simons, a professor of psychology and head of the Visual Cognition Laboratory at the University of Illinois Urbana-Champaign, where he also has courtesy appointments in the Department of Advertising and the College of Business. Dr. Simons’ research explores the limits of human awareness and memory, the reasons we’re often unaware of those limits, and the implications of those limits for our personal and professional lives.


Next is Dr. Christopher Chabris, a professor and co-director of the Behavioral Decision Sciences program at the Geisinger Health System. Dr. Chabris is a cognitive scientist whose research focuses on decision-making, attention, intelligence, and behavior genetics.


Dr. Simons and Dr. Chabris were also co-authors of the 2010 New York Times bestseller, The Invisible Gorilla: How Our Intuitions Deceive Us. They’ve collaborated on research for more than 25 years and spent nearly a decade working on their new book, which explores how scammers, con artists, and other liars fool us and what we can do to avoid being taken in. Thank you both for joining me today.


Daniel Simons, PhD: Thanks for having us on.


Christopher Chabris, PhD: Great to be here.


Mills: You start your book with a quote from James Mattis, the former U.S. defense secretary who said, “Once in a while we can all be fooled by something.” He was talking about the fact that he served on the board of the disgraced company Theranos before it was revealed as a fraud. Let’s start there. Why is it that we’re all vulnerable to being deceived at least some of the time?


Simons: I think we tend to think that only the most gullible among us can be deceived, and there’s good reason for that. Whenever you hear about a scam or you hear about somebody falling for a con, in retrospect, it seems obvious and you can think to yourself, “Oh yeah, I wouldn’t have fallen for that.” But most scams aren’t targeting you. Right? They’re targeting somebody. When a scam is really targeted to you in particular—that meets your desires, your wishes, your wants—you’re more likely to fall for it than if it’s not. We all can be subject to being deceived if the targeting is aimed at us and it takes advantage of the way we think and the information we’re looking for.


Mills: Dr. Chabris, let me ask you, what are some of the most common cognitive shortcuts or thinking patterns that get us into trouble?


Chabris: I think the first one that we talk about is truth bias. So truth bias is the idea that our default tendency is to think that whatever we hear or read or encounter is true, that people are not always trying to lie to us or deceive us. That’s sort of a precondition for being taken in by any kind of scam or fraud because if you believed everything you saw was false or questionable or misleading, then you wouldn’t really act on those offers and those promises and those sales pitches and those marketing messages. And truth bias is important for us to have because if we didn’t tend to accept what other people said, we’d never be able to make plans with them, have a conversation with them, or do anything really, other than be skeptical all the time.


Once you start with truth bias, then other cognitive habits we have can get us into further trouble. I guess one I would highlight to start with is our habit of focus, which refers to our tendency to focus on information that’s right in front of us, that we have easily at hand—often, that someone has provided to us—and make decisions based only on that information, operating essentially under the assumption that that’s all that matters or that’s all there is. One of the main ways to avoid being deceived is to think about information that people aren’t showing you, that’s missing, or that’s been withheld, and to broaden your focus to include other information besides what the scammer or con artist wants you to be focusing on at that particular moment.


Mills: It’s especially easy to fool people when you tell them something that they want to believe anyway, or something that matches their preconceived notion. How can we fall into the trap of not being willing to reexamine our own beliefs?


Simons: This is a general issue. We tend to make predictions about the world. We have to. We have to form expectations for what’s going to happen, and we can act on those. Most of the time, that’s a great thing to do, and that’s true for most of these cognitive tendencies. They generally work really well for us. It’s only when somebody is hijacking them to take advantage of us that we run into problems. The problem with predictions and expectations is that when somebody gives us exactly what we’re looking for, we don’t tend to question it as much as we would if somebody told us the opposite of what we believe. You’re much more likely to forward that story on Facebook or Instagram or wherever if it matches your beliefs, without stopping to think, “Hey, is that really true or might it be false?” Even in research, we tend to be much more willing to question our findings if it looks like they came out the opposite of what we were expecting than if they came out exactly as we predicted. That tendency to just not double-check ourselves can get us into trouble.


Mills: Well, you just mentioned the world of science, and I know in your book you’ve given several examples of scientists, including psychologists unfortunately, who have gotten away with fraud and deception, in some cases for many years. How common is scientific fraud and how should non-scientists think about this when they’re trying to read and make sense of the science news that they may be reading in publications that they look at all the time?


Chabris: I don’t know if we have a good estimate of what the base rate of scientific fraud is because it hasn’t all been uncovered. We can’t go back to the last 100,000 publications and count up how many were fraudulent or had other kinds of fabrications or anything like that. It’s probably more common than the detected cases so far would have you believe, but it’s probably not as common as some of the biggest critics of scientific fields would want you to believe either. Most scientists that I know—and I’m sure that Dan knows—most scientists in the world are trying to get accurate results. They’re trying to report what they actually did. They’re not deliberately falsifying things.


But in a climate where it’s reasonable to expect that reported scientific results are honest and true, a few people can get away with inserting themselves into the process and deliberately slipping fabricated results into the literature. I think the general public should be skeptical of specific scientific results if they detect signs like: it’s based on a very small sample size. It’s based on a single study. It hasn’t been replicated by independent researchers. It’s getting hyped quite a lot in the media, and yet nobody else seems to be able to reproduce the findings. If it claims a gigantic impact from a very tiny intervention, that’s a sign that something might be overblown or misleading or perhaps even made up. But when you encounter the general considered opinions of large bodies of scientists who’ve really looked carefully at the work and done dozens, hundreds, thousands of studies over years, those are the kinds of things that people should really trust and rely on.


Simons: Just to add to that, we tend to have this impression that discoveries happen in a bolt of insight in a single finding, and that’s not how discovery in science works. In psychology, we often want to claim that something is a truth about people: their behaviors, their beliefs, their traits. You’re not typically going to come to a definitive conclusion about all people from one study, especially a very small one. Discovery involves incremental understanding of an idea and testing it in a wide range of contexts to make sure it actually applies more broadly. It might be a discovery about one tiny little sample at one time, but if you don’t test whether it generalizes to a lot of contexts and a lot of people, it’s not really a broad discovery in the sense that we tend to think of in science. I would be wary of single studies. “This is a new study that’s told us something that we didn’t know before and that nobody ever thought of before.” That tends to be something that’s getting over-hyped.


Mills: Now, a lot of scams involve promising something that’s too good to be true, like an investment opportunity that offers unheard of returns or a company that promises a miracle medical cure. If you’re offered something that sounds like an amazing opportunity, what are the questions to ask yourself to decide if it falls into that too good to be true category?


Simons: I can start with that. We all know that when something is too good to be true, we shouldn’t believe it. That’s something that everybody kind of knows: if it seems too good to be true, it probably is. The problem is that what sounds too good to be true to one person is just good enough for another. Somebody else is going to find it just plausible enough that they’re going to jump in, right? The challenge in deciding when something’s too good to be true is making sure that you view it skeptically from the beginning.


If something seems really promising, really amazing, you should take a step back and say, “What would I need to know in order to verify that that’s actually true?” We know that if somebody offers you 50% returns on your investment with a guarantee of no losses all within a year, that’s almost certainly not true because no investment does that. You can take a step back and say, “What would they have to be doing to be able to make that claim?” As soon as you start questioning it a little bit, you might realize that no, this really actually is too good to be true.


Mills: I was just going to say, what about the case of Bernie Madoff, where it was too good to be true, and yet he was paying people, so you didn’t really have a reason to question other than, “Well, the market isn’t behaving this well, and yet I’m still getting 15% returns every year.” How do you escape being in a situation like that?


Chabris: Too good to be true, as Dan said, can mean different things to different people, but there are some signs that are pretty common for things that are too good to be true, or at least that shouldn’t be believed. In the case of Bernie Madoff, what he was offering wasn’t really necessarily too good. It was not a really huge rate of return. It was 8–12% a year, which is a little bit above the long-run average of the stock market, but not too much. It’s nothing like what the original Ponzi scheme was offering or what a lot of cryptocurrency schemes have offered recently. The telltale sign was that it was the same every single year: there was never a losing year, and there was barely ever even a losing month. He managed to make the returns on his investors’ investments fit within that narrow band.


The excessive consistency there was, I think, what trapped people. They thought, “This is a sure thing. I will never lose. I’ll get the same amount per year.” Kind of like a treasury bill or some kind of bond or savings account, but just with a much higher rate of return. That was, I think, the innovation. Really, the true novelty of Madoff—it’s called a Ponzi scheme, but some people call it a Madoff scheme because he really added this new dimension of not offering people the ridiculous proposition that educated people who’ve been in the stock market for a while know is unrealistic, the 50% in six months or a year or something like that, but offering something that seems much more reasonable—it wasn’t too good to be true in the sense of the total amount of money they made. It was too good to be true in the sense of the consistency with which they made it. Consistency, or the lack of it, is an important sort of sign of fraud that people can pay more attention to than they might otherwise.
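The “excessive consistency” signal Chabris describes can be sketched with a toy simulation. The return series below are invented for illustration (they are not Madoff’s actual reported numbers): one stream behaves like a plausible market fund, the other is implausibly smooth.

```python
import random
import statistics

random.seed(42)

# Invented, illustrative annual returns over 20 years:
# a plausible market-like stream (~8% average, 16% volatility)
# versus an implausibly smooth stream (~10% average, 1% volatility).
market_like = [random.gauss(0.08, 0.16) for _ in range(20)]
madoff_like = [random.gauss(0.10, 0.01) for _ in range(20)]

def consistency_red_flags(annual_returns):
    """Two simple 'too smooth' checks: count of losing years, and spread."""
    losing_years = sum(r < 0 for r in annual_returns)
    spread = statistics.stdev(annual_returns)
    return losing_years, spread

print(consistency_red_flags(market_like))  # typically several losing years, wide spread
print(consistency_red_flags(madoff_like))  # no losing years, tiny spread
```

A genuinely market-like series almost never avoids losing years for two decades, so a long track record with zero down years and a tiny spread is exactly the consistency anomaly worth questioning.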


Simons: That happens in science, too. One of the hallmarks of fraud in clinical trials is if the baseline conditions—when people are measured at the beginning, you’ll often do checks to make sure that the people assigned to the treatment and the placebo group are the same in other ways, so that those aren’t sort of contaminating the effects of your treatment. You kind of expect the two groups to be balanced on prior health conditions, for example, or age, and if they’re all perfectly balanced, that’s weird, that’s a red flag, because just by chance, if you randomly assign people to groups, they won’t all end up perfectly balanced. You’ll end up with a distribution with some differing on some factors and not on others. It’s turned out that that’s been a major way of finding fraud in a number of disciplines from anesthesiology to any field that has clinical medical trials.
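The baseline-balance check Simons mentions can be illustrated with a quick simulation (the cohort size and prevalence are invented for illustration, not drawn from any particular trial): randomly split a fixed cohort into two arms and ask how often a single binary baseline trait ends up exactly balanced.

```python
import random

random.seed(0)

def exact_balance_rate(n=100, n_with_condition=30, trials=10_000):
    """Randomly split n participants into two equal arms and estimate how
    often a binary baseline trait (here, a prior condition held by 30 of
    100 people) comes out *exactly* balanced between the arms."""
    population = [True] * n_with_condition + [False] * (n - n_with_condition)
    exact = 0
    for _ in range(trials):
        random.shuffle(population)
        treatment, placebo = population[:n // 2], population[n // 2:]
        if sum(treatment) == sum(placebo):
            exact += 1
    return exact / trials

rate = exact_balance_rate()
print(rate)  # roughly one run in six for this single covariate
```

Even a single covariate lands in perfect balance only a minority of the time, and the chance that many independent covariates are all perfectly balanced at once shrinks multiplicatively, which is why a too-tidy baseline table in a published trial is a red flag.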


Mills: Your book warns people to be wary of the butterfly effect. What do you mean by that?


Chabris: Well, the old anecdote goes that a butterfly flapping its wings in Brazil can cause a tornado in Texas, and that’s based on relatively new ideas about chaos theory and attempts to model how small changes in one place can perturb a system so that some dramatic thing happens someplace else. People tend to be too accepting of the idea that tiny little interventions in the world (not random things like butterflies flapping their wings, but very small, very brief interventions; for example, a half-hour or hour-long reflective exercise at the beginning of college) can cause big changes in people’s GPA and willingness to stay in college and complete the degree, just to give one example of this class of effects. Another one is priming studies: the idea that very brief exposures to stimuli we don’t notice or remember encountering can dramatically change our behaviors or beliefs seconds, minutes, or even longer afterward.


There’s one particularly dramatic example where a study claimed that subliminally seeing the Israeli flag caused people to be more likely to vote, and to vote for centrist parties as opposed to more extreme parties, days or weeks later in an actual election. That’s just the kind of thing that you should look at and say, “How plausible is it that something that happened to you for 150 milliseconds in October changed what you did in the voting booth in November?” There are many, many studies and stories like this. There’s an inherent appeal to the idea that we could discover these little tiny things that make a vast difference, and if we did discover them, that would be great. It’s not irrational to want tiny, cheap things that produce huge outcomes. A vaccine is one of those things: a tiny little injection that you get once that might prevent you from getting a disease for the rest of your life in some cases. It’s just that those are rare events in science. They’re not the things that are coming out every day in our psychology journals, unfortunately.


Simons: That’s a sort of general principle: anytime you see a big effect from a small intervention, that’s when you should require the strongest evidence. You shouldn’t bank on a single study showing something like that, because the odds are good in those sorts of cases that it’s not going to replicate or that it’s not going to be as general as claimed. Those are the effects that are really engaging and exciting, and more likely to be wrong, because it’s really rare to have giant effect sizes from tiny little variations, tiny little interventions.


Mills: Do either of you have a “favorite” con or scam you came across in researching the book—and I’m using the word favorite in air quotes here—and why is it your favorite and what can we learn from it?


Chabris: I’ll start with one that I knew about before we started writing the book, but I learned a lot more about it in researching the book. That’s the famous case of John von Neumann. You may have heard of John von Neumann as a famous economist, mathematician, computer scientist, game theorist—one of really the fathers of modern science—but it was also the name taken by a guy who entered a chess tournament in 1993 under that name. I don’t think it was his real name. This was 30 years ago—actually 30 years ago this month, in fact, July 1993. Nobody had ever heard of the guy before. He touched and moved the pieces as though he had never played in a chess tournament before. He looked like a complete outsider and novice.


Yet, he drew with a grand master and beat masters and was doing really well in this chess tournament; in fact, tied for a prize at the end. He didn’t win every game. Some games, he would just sort of suddenly stop playing for no reason for 40 minutes and just let his time run out. Or, some games, he would try to make illegal moves and just sort of stop. At the end of the tournament though, he had won enough games to win a prize, but everybody was suspicious about how this guy could be doing it. They didn’t really know exactly how, but they thought this can’t happen. The tournament director, a guy named Bill Goichberg, who I’ve actually known since the very first chess tournament I played in, didn’t want to give money to this guy who he thought might be cheating somehow, but he didn’t know whether he was cheating or how he was cheating.


Before writing him a check, Goichberg set up on a chess board a very simple chess puzzle that a five-year-old who had learned the game a month earlier could probably have solved. The player refused to even try it, walked out, and was essentially never seen again. The reason why I love that con so much—and it’s generally agreed now that, basically, what the guy was doing was probably concealing an earpiece and getting information transmitted to him by an accomplice who was using a computer to figure out what moves to play, sending code through the earpiece saying, “Move this piece from here to there.” But if the transmission broke down, he would just stop and wait to hear something, and that didn’t help him solve the puzzle at the end.


The lesson that we drew from this for what you can do yourself: it would be tempting to just write the check and move on, as our habit of efficiency leads us to do, making quick decisions and keeping things moving. But Goichberg thought to ask one more question of this guy, to do a little tiny test and see whether he really was what he purported to be, that is, a chess master. In 10 seconds, he had his answer, when the guy refused to do it, started arguing, and left. We advise anybody to really think about what questions you could ask. Sometimes it only takes one to get a crucial additional piece of information that will let you decide whether you’re being conned or scammed, or whether something is legitimate. That’s just my favorite example because of the whole story that surrounds it.


Simons: Yeah, I could talk about one that I think is more likely to be encountered by you or me, which is what’s known as a call center scam. These are scams that we all have encountered. If you’ve gotten calls asking you to call back to extend your car’s warranty, the vast majority of those are scams, because most of us don’t need to do that. You’ve probably never called back that number, but anybody who has called back has selected themselves as somebody who is likely, or at least more likely, to send money. It’s the same way the famous Nigerian email scam works: it looks ridiculous on its face to most people, but it doesn’t have to convince most people. It only has to convince the people who find it really appealing and who are most likely to fall for that particular pitch. You can send out millions of emails and only reach a tiny number of people who are going to respond, but that’s exactly what you want. You only want to interact with the people who are most likely to give you their bank account number.


Call center scams have taken on a new form, which is to put people under huge time pressure and threaten them. For example, saying, “You owe money to the IRS, and unless you pay it right now, the police will come knock down your door.” What they often will ask people to do is go out and buy a prepaid cash card and then read the number over the phone. Most people aren’t going to fall for that, but they’re really good at their pressure tactics, so that if you feel like you’re in danger, if you feel like you’ve maybe done something wrong and you’re not sure, there’s a really strong pressure to just pay up and be done with it. Of course, as soon as you pay up, they’re going to be asking you for more money. I think it’s a really interesting case because it’s become a really successful, industrialized fraud business.


People should keep in mind that no government agency or formal corporation or any organization that’s asking you for money ever asks you to go buy a cash card and read a number over the phone, ever. That’s always a scam. Nobody’s going to ask you to go out and buy cash cards. The other thing is that there’s never going to be a case where somebody’s going to call and allow you to pay off if the police are on their way to your door. The IRS doesn’t send the police to your door based on a phone call and not paying immediately. There’s never that much rush.


These call centers have refined their techniques to put you under a huge amount of pressure so that you want to just make them go away. That’s been a really effective tactic. If you get that sort of call, you can safely hang up, because if it’s true, they’re going to email you, they’re going to send you a letter, they’re going to ask you to respond in some formal way, and you can call the official number. Don’t call the number they give you; look it up online yourself and call the official number to see if it’s real. More often than not, it’s not.


Chabris: Dan’s story is a great example of, I think, why we fall for this gullibility myth, right? Because we hear a story like this and we think, “I would never believe that the IRS is sending the police to my door to collect my tax bill with Apple iTunes cards,” or whatever. “Who are these people who fall for this stuff?” Well, of course, I’m a scientist, and scientists have been defrauded, but they’ve been defrauded in a different way from that. They’ve been defrauded by people who understood how to defraud scientists: by making up plausible research results and getting them past peer review and so on. Everybody can get conned and scammed, but the way it happens is going to be specific to each person, and people who think they’re sophisticated and superior can be tricked in other ways, maybe not by that. There’s something out there for all of us in the world of scams, unfortunately.


Mills: I mentioned AI in my introduction as a kind of technology that may be making it easier for people to cheat and lie in all sorts of new ways, such as students using ChatGPT to write their papers or people using AI tools to write fake news stories or doctor photos and videos. Do you think AI is going to lead to a big rise in deceptions and scams of all kinds or even new kinds?


Simons: I think one of the themes that we’ve run across over and over in researching this work is that the same principles have applied to scams as far back as you can look, right? The same taking advantage of our habits and hooking us with the information we find appealing: that’s been true going back to the Trojan horse. There’s not anything new in that sense. There are new variants of old scams that emerge all the time, and AI might allow some new variants to become much more potent. Here’s one case where I think it’s particularly going to be problematic. There’s a scam right now that’s been pretty effective, sometimes called an injury scam or a kidnapping scam, where somebody calls up a parent or a grandparent and says, “Your kid’s been in an accident. We need you to send money right away so we can get them treatment at this hospital.” It’s not actually a hospital.


Again, it’s a time pressure, preying on fears, preying on your desire to help right away. Often that sort of scam depends on people being targeted because they know their kid isn’t home and they’re taking advantage of a little bit of information they’ve picked up about the parents and the kid on social media. But imagine how powerful that can be if, instead of calling up and having it be somebody else that you don’t know making that call, if it’s a synthesized version of your kid’s voice. Makes it that much more believable. I think a lot of these sorts of things potentially could be amplified as AI capabilities improve and develop. Yeah, that’s a concern, and there are things you can do to try and prevent that in advance. One thing I’ve talked about with my family is having a passcode for our family so that if we ever are in doubt whether this is true, you just ask for the passcode and if they don’t give it to you, it’s a scam.


Mills: That’s a good trick. Here we are putting our voices out there on the internet for everyone to copy. One of the things that we do at the American Psychological Association is our IT department sends fake phishing emails to employees, and you’re supposed to report them as a training exercise, and of course there are always people who fall for them. Do these strategies work? Can you be taught to be alert to scams?


Chabris: I think you can. You could start by reading our book, but often the specifics of a particular scam, those will evolve and change over time. These particular phishing emails that the IT department sends are designed to kind of look like the phishing emails that the people who run these business email compromise scams are using these days. They won’t perhaps permanently inoculate you against every kind of scam, but I think they’re a reasonable form of training and that they also help to generate information, like 25% of your colleagues fell for this thing, which can maybe raise awareness more broadly if the scale of the susceptibility is known. There have been various research studies that have found surprisingly high percentages of people willing to turn over their passwords and other credentials to these kinds of scams, but they’re not a permanent solution. I think to the extent there is a permanent solution, it really involves understanding the patterns of what the scammers are trying to do and what in particular they’re trying to target about our own habits and trying to react to that more.


Simons: There’s also a downside to constantly doing that sort of check, which is that it makes you distrust emails that seem to be coming from your own administration and your own company. If you’re constantly getting fake emails, you start to wonder, “Should I believe that this actual administrative email is genuine, or do I just need to assume that they’re suspicious of us all the time?” There’s some potential downside to that, but training people to recognize the hallmarks of that sort of scam and that sort of phishing is a really useful thing to do.


Mills: To wrap up, I want to ask you both, has working on this book and doing this research changed any of your own habits? Has it made you less trusting or more trusting in life?


Chabris: I do think that having looked at such a wide variety of different kinds of cons and scams throughout history, I learned a lot about ones that I never knew about. It definitely, I would say, has made me more suspicious—hopefully not in a bad way—but also, I think made me a little better at recognizing the patterns that you see among these things so that when you see some new offer or some new thing in a news story or whatever, you can put it to a framework where you say, “Oh, I understand what they’re doing in this thing.” And that makes it easier to classify, easier to remember, easier to think about how you might avoid it, or what you might advise somebody else to do. I think overall it’s been quite a positive, even though, on occasion, it has been dismaying to see what people are doing and what people are falling for.


Simons: I think one of the big challenges in this context is knowing when you need to be more skeptical, when you need to be more critical, because you can’t be a skeptic all the time. That would be a terrible way to go through life; you’d be constantly cynical about everybody and everything. But knowing when it actually matters, and thinking about when something could be consequential for me: I think that’s what looking through all of these different scams across history gives you, a sense of what the patterns are for big risks. It helps you to think, “Okay, do I need to worry about whether the grocery store has given me exactly the price they promised on the shelf?” Well, if a couple of pennies here and there don’t matter to me, I can get a sense of whether it’s overall close enough.


I don’t necessarily want to check everything in detail, so I can kind of let the sort of small scams and small deceptions, that probably happen unintentionally all over the place, let those go and not worry about them too much, and really think about it when there’s something that could be very consequential.


Mills: Well, Dr. Simons, Dr. Chabris, I want to thank you both for joining me today. This has been really interesting.


Simons: Thanks. It’s been fun chatting.


Chabris: Yeah, thanks for having us.


Mills: You can find previous episodes of Speaking of Psychology on our website at www.speakingofpsychology.org, or on Apple, Spotify, YouTube, or wherever you get your favorite podcasts. If you like what you’ve heard, please leave us a review. If you have comments or ideas for future podcasts, you can email us at speakingofpsychology@apa.org. Speaking of Psychology is produced by Lea Winerman. Our sound editor is Chris Condayan. Thank you for listening. For the American Psychological Association, I’m Kim Mills.

