SO YOU’RE SAYING THERE’S A CHANCE — With six weeks to go until Election Day, the presidential election looks incredibly close in both national and battleground-state public opinion polls. The most popular political forecasts, which employ probability models, indicate the same thing through different methods. These forecasts aggregate polling data, sometimes incorporating environmental factors (how much time remains until the election, whether a candidate is an incumbent, what the unemployment rate is), to reach a probabilistic conclusion about who is likely to win.

In its most recent update, one of the best-known forecasts, Nate Silver’s Silver Bulletin, gives Vice President Kamala Harris a 53.2 percent chance of winning, compared to former President Donald Trump’s 46.6 percent shot. The Economist has Harris at 57/100 and Trump at 43/100. The Hill/Decision Desk HQ projects Harris has a 55 percent chance of victory and Trump a 45 percent shot. And 538 says that Harris wins 58 times out of 100 while Trump claims victory 42 times out of 100.

The trouble is, it’s not entirely clear that most people understand the purpose of the models or what they tell us. Studies have shown that these kinds of forecasts can confuse voters, causing them to conflate a candidate’s probability of winning with her expected vote share. For example, they might look at the 538 forecast (which shows Harris with a 58 percent chance of winning compared to Trump at 42 percent) and believe she’s ahead of Trump by a significant margin: 16 points. Now, if Harris were beating Trump by 16 points in the national polls, that would indeed be big news; it might be the harbinger of a landslide. But that’s not at all what the probability models are saying. In an election as close as this one, in a nation as divided as ours, distorted perceptions about the outcome can be a dangerous thing.
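To see why a 58 percent win probability is nothing like a 16-point lead, consider a minimal Monte Carlo sketch. This is not any outlet’s actual model; the 1-point polled lead and 5-point polling-error figures below are illustrative assumptions chosen so the output lands near 58 percent.

```python
import random

random.seed(42)

def win_probability(poll_margin, polling_error_sd, n_sims=100_000):
    """Estimate a win probability by simulating many elections in which
    the true margin equals the polled margin plus normally distributed error."""
    wins = sum(
        1 for _ in range(n_sims)
        if random.gauss(poll_margin, polling_error_sd) > 0
    )
    return wins / n_sims

# Under these toy assumptions, a roughly 1-point polling lead already
# translates into about a 58 percent chance of winning.
print(win_probability(1.0, 5.0))
```

In other words, a modest edge in a noisy race can produce a win probability well above 50 percent; the probability and the margin live on entirely different scales.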
What’s important to understand is that a probability of victory and the results of actual public opinion polling are different things. Each of the above outlets uses its own model to produce the probability it spits out: some weigh polls from certain polling firms more heavily than others, and some price a “convention polling bounce” or an “incumbency advantage” into their models. This means that some models are much more reactive to different moments on the calendar and different polling inputs than others.

Consider these examples. Over the last two weeks, the same polling has been available to everyone. Yet on Sept. 9, The Hill/DDHQ gave Harris a 54 percent chance to win, while The Silver Bulletin had Harris at a more modest 35.3 percent chance. Since that day, The Hill/DDHQ’s forecast has barely moved, but The Silver Bulletin’s has shifted considerably. If you only looked at the former forecast, you’d believe the race has barely changed. But if you only looked at the latter, you’d think there’s been some real movement in the race.

Around this time in the election calendar, people get desperate for any news. So Silver’s model, which is much more reactive than The Hill/DDHQ’s, drives a lot more conversation. But even his relatively large change isn’t as significant as a lot of pundits would have you believe. You’d rather have a 53 percent chance of success than a 35 percent chance, but both are well within the realm of possibility.

And this brings us to the question of the value of political forecasts in the first place. Silver’s critics often cite 2016 as an example of how he was wrong: he gave Hillary Clinton a 71.4 percent chance of winning, and she lost.
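One reason two models can diverge on identical polling is how heavily each weights recent polls. Here is a toy comparison, not a reconstruction of either forecaster’s method: the same four polls fed to a uniform average and to a recency-weighted one, where the poll margins and the decay factor are made-up numbers for illustration.

```python
def uniform_average(margins):
    """Treat every poll equally, regardless of age."""
    return sum(margins) / len(margins)

def recency_weighted_average(margins, decay=0.5):
    """Halve each poll's weight for every newer poll that follows it
    (newest poll is last in the list)."""
    weights = [decay ** (len(margins) - 1 - i) for i in range(len(margins))]
    return sum(w * m for w, m in zip(weights, margins)) / sum(weights)

# Three polls showing a 1-point lead, then a new poll showing 3 points.
margins = [1.0, 1.0, 1.0, 3.0]
print(uniform_average(margins))           # 1.5: the slow average barely moves
print(recency_weighted_average(margins))  # ~2.07: the reactive average jumps
```

Same inputs, different weighting, noticeably different movement: a more reactive scheme will always generate more apparent news from the same polls.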
He cites it as an example of how he was right: other prediction sites and betting markets gave her an even higher chance of winning, whereas he was more bullish on Trump’s chances. (In his most recent book about people who break norms, Silver explains that people who gamble on elections told him his model helped them make money in 2016, because it was more bullish on Trump than the gambling markets were.)

And Silver wasn’t really wrong. Outcomes with a low chance of occurring happen fairly frequently in our daily lives. Just a week ago in an NFL game, the Atlanta Falcons had a mere 0.7 percent chance of defeating the Philadelphia Eagles with under two minutes to go in the fourth quarter. They staged an improbable comeback and won. Given that we have many more results in pro football than in presidential elections (the NFL plays 272 games over the course of a season, whereas there’s only one presidential election every four years), it’s much more likely that something with a 0.7 percent chance will eventually happen on the football field. And yet it’s possible in politics as well. Without more results, it’s impossible to conclusively answer, even after an election, how “accurate” any model truly is.

So, for the election forecasting-interested among us, by all means continue to compulsively check how the candidates appear to be doing. Just don’t conflate the forecasts with the polls, and remember that a probability is just that: it’s not a hard-and-fast prediction. Strange things happen all the time.

Welcome to POLITICO Nightly. Reach out with news, tips and ideas at nightly@politico.com. Or contact tonight’s author at cmchugh@politico.com or on X (formerly known as Twitter) at @calder_mchugh.
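A postscript on the arithmetic behind that Falcons example. If you assume, purely for illustration, that each of an NFL season’s 272 games offers one independent 0.7 percent chance of such an upset, the odds that at least one happens somewhere in the season are surprisingly high:

```python
# One-off chance of the upset in a single game (illustrative assumption).
p_upset = 0.007
# Independent opportunities in one NFL regular season.
n_games = 272
# Chance that at least one such upset happens somewhere in the season.
p_at_least_one = 1 - (1 - p_upset) ** n_games
print(round(p_at_least_one, 2))  # prints 0.85
```

That is the sense in which low-probability outcomes happen "fairly frequently": rare per trial, routine across enough trials.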