Nate Silver admits he got played by the GOP but blames the Democrats for not using poor polling practices

This is the big problem with much of social science right now too, which relies heavily on self-reported surveys. How you word the question can swing the results entirely from one extreme to the other; even tiny word choices have an alarmingly large effect. This should render such surveys unacceptable to science and to political polls that care about truth, but the combination of high demand for the data and the lack of any better method yet means we’re stuck with garbage in, garbage out.

12 Likes

Even then, the BB write-up is based on a tweet from Tom Bonier, which, IMHO, may have misinterpreted the point that Silver was making. I’m asking when and where Nate Silver himself actually said that his work wasn’t as accurate as he wanted it to be, or that changes in Democratic polling practices would help his accuracy.

2 Likes

I agree with this, but I think it’s unclear what direction this influence goes. Does predicting a red wave for months motivate D voters to come out and make R voters complacent? Or does it demoralize D voters so they stay home? I don’t think that’s clear.

7 Likes

All I have to go on is the BB write-up and the screenshot.

If we ignore the BB write-up and the tweet that posted the screenshot, and look only at the screenshot of Nate’s tweets (assuming those are accurate), it’s still not “great”.

“538s average can adjust for them to some extent” - Some, but not all. So the biased polls still affect the model, presumably in a negative way.

“D-leaning pollsters could release polls too” - Implying that D-leaning pollsters could alter the model’s predictions by releasing more biased polls of their own and swaying the results. That’s additional gaming of the output, instead of the model adjusting to deal with biased (garbage) inputs.

“it is worth noting that the composition of the polling averages is much different, and more R-leaning, than it was” - An admission that the input data is R-leaning. In context with the first statement, it means the bias is affecting the model’s results, and that adding more bias the other way could sway the model’s output too.

“to register an opinion” - Which implies that the model is a representation of the opinions of a bunch of pollsters, not so much a representation of what the people being polled think.

At least, that’s how I read those 3 tweets in the screen shot.

3 Likes

But that sentence is preceded by this one: “I think generally the complaint that GOP-leaning pollsters are ‘flooding the zone’ is not sharp.” I interpret that to mean he feels it isn’t affecting his model much because they account for it. But obviously some people here have a different read on what he said, so whatever.

Edit to add: I’ll bet we can all agree that Twitter is not the best forum for clear, nuanced conversations, right?

2 Likes

When we had a landline we got tons of calls around election season, but oddly I’ve never seen anything online or gotten an email to take a political poll…are the pollsters truly that far behind technology? Or am I just not spending time in the “right” online environments to get polled?

6 Likes

If we assume that they’re not impacting the model, the rest of the tweets don’t make any sense at all.

The “adjust for them to some extent via house effects” is doing a lot of work here. If those adjustments were dealing with the biased polls and negating their impact, then the rest of the comments wouldn’t make any sense. Adding in D-biased polls wouldn’t shift things the other way; they would just be adjusted out as well.
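
For what it’s worth, here’s a toy illustration of that point. It’s a minimal sketch with made-up pollsters and numbers, assuming the naive version of a house effect (a pollster’s average deviation from the overall polling average); this is not 538’s actual method, just the simplest form of the idea:

```python
# Toy sketch (made-up numbers) of why a house-effect adjustment only
# helps "to some extent": if a pollster's house effect is estimated as
# its average deviation from the overall polling average, then flooding
# the average with same-direction polls drags that reference point
# itself, and the adjustment can't see it.

def adjusted_average(polls):
    """polls: {pollster: [margins, D minus R]} -> house-adjusted mean."""
    margins = [m for ms in polls.values() for m in ms]
    overall = sum(margins) / len(margins)
    # House effect: this pollster's mean deviation from the overall mean.
    house = {p: sum(ms) / len(ms) - overall for p, ms in polls.items()}
    adjusted = [m - house[p] for p, ms in polls.items() for m in ms]
    return sum(adjusted) / len(adjusted)

base = {"A": [1.0, 0.0], "B": [-1.0, 0.5]}
flooded = {**base, "R-shop": [-5.0] * 10}  # one shop floods the zone

print(adjusted_average(base))     # ~ +0.12
print(adjusted_average(flooded))  # ~ -3.54: dragged R despite adjusting
```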

I think the one that gets me is the last tweet, the one that says the polling average (which is what his model is built on) functions like a betting market: it reflects the opinion of whatever pollster is willing to pay for a poll and accept any impact to its reputation.

If I wanted to know how the betting markets thought things would go, I would look at one of the betting markets. Isn’t the entire point of polling to glean what people are thinking? And of polling averages, to even out the differences between different polling data sets? Is he saying the only difference between fivethirtyeight.com and predictit.org is who is able to place a bet to move the graph?

2 Likes

I don’t claim to know the answer, but I think the real effect is to get independents behind the “winning” side. If it looks like one side is going to win, less-decided voters lean in that direction. It’s the bandwagon effect.

6 Likes

The way I read @mmascari’s point, your interpretation reflects even worse on Silver. It’s undisputed that his conclusions were flat-out wrong. If he thinks his model accounts for the bad polls and it was still that wrong, then he might as well soak it in gasoline and toss a match on it.

6 Likes

I kind of wonder if influencing voters is the point at all. Companies donate more money to the candidate they think is going to win.

4 Likes

Is it? As I noted above, he previously said that the pollsters had a good night. And the results of the election don’t seem to be too far off from 538’s final projections. We don’t know the final results in the House yet, but I would bet that it’s well within the bell curve here:

2 Likes

I had defended Silver before, with Trump’s election: he gave him a 29% chance of winning, which is not at all incompatible with the fact that he won. As elections repeat, though, it really does start to seem like outcomes are not clustered around his expected values the way they should be.

He says the model can handle the skewed data it’s getting, but if the only way it can do that is with large error bars (I mean, everyone knew that was the range of likely outcomes for this election), it might not be a very useful predictive tool. And to me that’s kind of the problem with him: he seems more interested in selling people on how useful the model is than in appraising how well it works.
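
The way you’d actually test that complaint is a calibration check: across many forecasts, things given a ~30% chance should happen roughly 30% of the time. Here’s a minimal sketch, with invented (probability, outcome) pairs standing in for 538’s published probabilities and the real results:

```python
# Calibration check sketch: bucket forecasts by stated probability and
# compare each bucket's predicted rate to how often the event happened.
# The (probability, outcome) pairs below are invented for illustration.
from collections import defaultdict

forecasts = [(0.29, True), (0.71, True), (0.80, False),
             (0.55, True), (0.90, True), (0.35, False)]

buckets = defaultdict(list)
for prob, happened in forecasts:
    buckets[round(prob, 1)].append(happened)  # nearest 10% bucket

for prob in sorted(buckets):
    outcomes = buckets[prob]
    rate = sum(outcomes) / len(outcomes)
    print(f"forecast ~{prob:.0%}: observed {rate:.0%} (n={len(outcomes)})")
```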

5 Likes

Ok, let’s look at objective data, then! Exactly how far off have the projections been? And how do they compare to others in the business over the last several election cycles? So far this election cycle, it seems pretty darn close to me overall.

A lot of people don’t seem to remember that in 2016 Fivethirtyeight was widely criticized for putting Trump’s odds of victory way too high, but they were much closer to getting it right than most other reputable forecasters.

If you want to say that the error bars are so wide that the forecasts are useless, fine, I won’t argue with that. But personally I prefer forecasts that present a transparent, probabilistic outcome over ones that make confident predictions that may turn out to be wrong.

2 Likes

I do too. I just really wish people would look at the error bars and say “oh, that means we don’t know right now” without attaching a useless “but…” to it.

7 Likes

Maybe we should entertain the possibility that the whole process of “predicting” elections is deeply flawed, and in fact probably hurts electoral turnout more than it helps us understand what is actually happening in elections.

I think we also need to wrap our minds around the fact that a mass-mediated society is a very difficult thing to pin down and understand, and that numbers aren’t always the best way to figure out what’s actually happening.

:woman_shrugging:

11 Likes

Was that really the prediction? I was just reading the tweets and commenting on what they said.

Was the prediction really that the House would land anywhere between 221 D and 246 R? That’s a 32-seat spread, 7.36% of the seats in the House.
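
Spelling out that arithmetic from the quoted endpoints:

```python
# The quoted band runs from 221 D seats at one end to the D count
# implied by 246 R seats at the other, out of 435 House seats.
total = 435
d_high = 221          # D-favorable end of the band
d_low = total - 246   # = 189, implied by the 246-R end
print(d_high - d_low)            # 32-seat spread
print((d_high - d_low) / total)  # ~0.0736, i.e. 7.36% of the House
```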

I would say this is correct. It’s not a prediction, it’s fun with graphs. Graphs that his tweets say can be gamed and impacted directly by pollsters if they want to.

With a range that large, how is the prediction any better than some guy just making it up? What analytical value is being added to produce a more fine-grained prediction?

2 Likes

Yes, although they use the term “forecast” rather than “prediction.” (Kinda like when the weather report says there’s an 80% chance of rain.) That chart was part of their final forecast for the midterm. Here’s the whole thing before it got locked down on the last day:

2 Likes

Yes. If the actual outcome is on the fringes of your bell curve, your model is inaccurate. If we worked this way in medical device design, there would be a lot of exploding pacemakers.

And I’m one of the folks who point out that 538 showed the shift that occurred due to Comey’s interference just a few days prior to the 2016 election, a shift that indicated Trump had a solid chance of winning (and showed just how much influence that interference had). But 538 was a hot mess in both 2018 and 2020, and clearly didn’t adapt enough to make improvements for 2022.

7 Likes

We’re still waiting to learn what the actual outcome is, so we can revisit this in a couple days, I guess. But I’m not currently seeing predictions that it will be anywhere near the “fringes” of the bell curve.

Edit to add:
To be clear about what they were actually saying in their forecast, the claim was “per our model, we’re 80% confident that the number of House seats per party will land somewhere within this highlighted portion of the bell curve.” You can certainly argue that was an overly vague forecast with limited to no value. But if the final result does indeed land within that region, I think it’s hard to say that “it’s indisputable that they were flat-out wrong.”
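
For anyone who wants the mechanics, here’s a toy version of what that 80% band means, using an invented simulated distribution and a hypothetical final result in place of 538’s actual model runs:

```python
# Toy 80% forecast band: take the 10th-90th percentile range of
# simulated R seat counts, then check whether a result lands inside.
# The distribution parameters and final result here are invented.
import random

random.seed(0)
sims = sorted(round(random.gauss(225, 12)) for _ in range(10_000))

lo = sims[len(sims) // 10]       # 10th percentile
hi = sims[9 * len(sims) // 10]   # 90th percentile
print(f"80% of simulations fall between {lo} and {hi} R seats")

result = 222  # hypothetical final R seat count
print("inside the band" if lo <= result <= hi else "outside the band")
```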

2 Likes

Without any polling, predictive input, or advance knowledge, and based only on the races called already and a read of the Washington Post tracker’s leaning/hashed graphic as of today, 11/10, at 2:00 PM Eastern: I’ll predict the House will end up somewhere between 215 R and 225 R.

If I’m right, well, it wasn’t made ahead of time, but my prediction cost me nothing to make.
If I’m wrong, well, my prediction cost me nothing to make and wasn’t based on any analytics at all, just a gut guess made after lots of the results were already known.

Isn’t the 538 prediction supposed to be more accurate than that?

PS: It’s a good thing I’m not involved with pacemakers at all. :grinning:

2 Likes