Exactly. The problem here is people, not the data viz.
Perhaps more that the visualization is intended for a different audience.
I’ve struggled with visualizing 5-D data, or 3-D with multiple chemical fields. If the intent is to illustrate a scientific point, you’re going to get a different product than you would with, say, an alert system. (It’s been a while since I was current on this, but) I recall a simple rule that is standard in chemical process control: every alert must correspond to a documented action to be taken by the people receiving the alert.
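To make that rule concrete, here is a minimal Python sketch of checking an alert catalog against documented actions. The names (Alert, ACTION_CATALOG, the tag strings) are hypothetical and not from any real process-control system:

```python
# Minimal sketch of the "every alert maps to a documented action" rule.
# Alert, ACTION_CATALOG, and the tag strings are made-up illustrations.
from dataclasses import dataclass


@dataclass(frozen=True)
class Alert:
    tag: str       # the instrument or process variable that fired
    severity: str  # "advisory", "warning", "critical", ...


# The documented operator response for each alert that can be raised.
ACTION_CATALOG = {
    ("TI-101-HI", "warning"): "Reduce reboiler steam; notify shift lead.",
    ("TI-101-HI", "critical"): "Trip the unit per SOP; follow the site emergency plan.",
}


def alerts_without_actions(alerts: list[Alert]) -> list[Alert]:
    """Return configured alerts that have no documented operator action."""
    return [a for a in alerts if (a.tag, a.severity) not in ACTION_CATALOG]


if __name__ == "__main__":
    configured = [Alert("TI-101-HI", "warning"), Alert("PI-204-LO", "advisory")]
    for orphan in alerts_without_actions(configured):
        print(f"{orphan.tag}/{orphan.severity} has no documented action -- fix before commissioning")
```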
I read these charts more as a scientific communication product than an alert system. These are to be consumed by the people who will then send out alerts and plan the actions to be taken, with limited resources, based on probabilities of outcomes.
Actually, I really like the spaghetti plots since they display the true extent of the variability in forecasts with different models. Each one of those is a best effort by some seriously bright and studious people. However, I’ve also trained extensively on those kinds of simulations and where some might see confusion I see scientific openness…
I guess I don’t have too many problems understanding the various hurricane charts produced by NOAA, especially since I live within half a mile of a beach. My safety may depend on it, so I make damn sure I understand potential threats. I don’t immediately see a good way to present the information in a more readily accessible fashion.
Hurricanes. Suck. Crap. That is all.
Right. It’s utterly impossible to make a graph so simple that a stupid person can’t fail to understand it. There’s a limit to how straightforward a graph can be. There’s no limit to stupid.
Look at that spaghetti plot above: that’s the real data a meteorologist (i.e. someone trained in the art) would look at. The expanding probability chart is already vastly simplified, and to my eye the alternate maps, such as the “earliest reasonable arrival” one, are worse. There are purple and green sections? I guess I’m A-OK if I’m in a green area, no risk at all!
The solution for utter nitwits isn’t a better graph, it’s reading the graph for them and telling them to get out.
(Hm. Mar-a-Lago.)
Come ooonnnnnn, Dorian!
Unfortunately, the nitwits don’t “get out.” They stay. And require rescue. And resources. Even the best graph (or the best telling) won’t help because there’s no such thing as foolproof.
Me too, but it really helps if you already know which of the models has a better track record.
Well said. Thank you.
The “warning cone” graphic has this at the top:
People just need to RTFM.
It doesn’t matter how accurate your GPS is. The key term in this entire dissertation is “uncertainty.” If people want to live their life preparing for disasters based upon degrees of uncertainty, I hope they won’t miss all the time they waste and lose…
It’s not too difficult to figure out what these forecast maps predict, and the potential threats that might arise, if a person spends just a few minutes reading the accompanying information (NOAA does a great job of providing easily-understood info in layman’s terms). For all the wizardry at our fingertips, none of it can think and respond for us…
Yes, people need to read Tufte more and add stuff to plots less. The solution is to understand what needs to be communicated and design for that, not use digital animations to blindly jam more cruft into the frame.
It’s interesting that, in the Hurricane Irma example in the linked article, the second location of Irma (slide 6) is not within - or even that close to - the initial “cone of uncertainty” (slide 5). So it’s not surprising Miami didn’t evacuate. The range-of-paths graphic (slide 7) does not show the slide-5 position, but if you were to overlay these later-in-the-week paths on that initial position, you would see the outlier “hang a left” path would have gone right over Miami. I’m guessing there were similar outlier paths off the first position as well.
So my solution to the problem posed by the article, and all the yammering on this bbs, is to replace the cone of uncertainty with the range of paths. It’s that simple. The range of paths - as represented in the NYT - has clear “bright red” paths of high probability (due to the overlapping of many paths) and clear “faded red” paths of low probability, but gives a much better idea of the range of paths without the “is it getting bigger or what?” problem. I think we can all agree a 60-70% probability cone isn’t nearly as useful as these paths in conveying the range of possible targets to any human, be they erudite/arrogant boingboing reader or average swamp-critter Joe.
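For anyone curious what a “range of paths” rendering looks like in code, here is a hedged matplotlib sketch using synthetic random-walk tracks (not real model output). Overlapping semi-transparent lines come out bright red; lone outliers stay faded:

```python
# Sketch of a "range of paths" plot: many semi-transparent tracks, so overlap
# (high probability) reads as bright red and outliers read as faded red.
# The tracks are synthetic random walks, not real forecast ensembles.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_tracks, n_steps = 200, 40

fig, ax = plt.subplots(figsize=(7, 5))
for _ in range(n_tracks):
    # Each track starts at the same fix and wanders with a slowly drifting heading.
    headings = np.cumsum(rng.normal(0.0, 0.08, n_steps))
    dx = np.cos(headings).cumsum()
    dy = np.sin(headings).cumsum()
    ax.plot(dx, dy, color="red", alpha=0.05, linewidth=1.5)

ax.plot(0, 0, "ko", label="current position")
ax.set_title("Range of possible tracks (synthetic ensemble)")
ax.set_xlabel("east-west distance")
ax.set_ylabel("north-south distance")
ax.legend()
plt.show()
```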
that is a bold, and highly unlikely, claim.
it generally takes a person who is well versed in their field and able to think outside it to produce data viz that - as others have said - fits the audience.
it’s not my field, nor, i’d bet, is it yours.
generally, if a large percentage of an audience cannot easily grasp what a person is trying to say, the fault is the speaker’s, not the audience’s. the speaker is the expert, so the obligation is theirs.
“There’s no use trying,” she said; “one can’t believe impossible things.”
“I daresay you haven’t had much practice. When I was younger, I always did it for half an hour a day. Why, sometimes I’ve believed as many as six impossible things before breakfast.”
A range of paths with varying degrees of certainty expressed by color intensity makes sense because:
- people suck at understanding randomness and probability. When the ‘shuffle’ option came out on iPhones, people said it couldn’t be random because it played the same song twice in a row!
- the visual of a cone expresses two variables in one visual item (location and uncertainty of the location), and the variables are interrelated yet both are varying, so it is hard to tell what the actual danger is at any given point. Our brain then says “outside the cone is safe”, though there is a tapering, non-visible zone of danger outside the cone.
- one color implies certainty, whilst multiple colors imply varying degrees of certainty.
- one cone implies certainty, whilst multiple paths indicate uncertainty, and increasing uncertainty as they diverge (see the density sketch after this list).
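Here is the density sketch mentioned above: a hedged matplotlib example, again with synthetic random-walk tracks rather than real forecast data, where the paths are binned onto a grid and color intensity stands in for probability:

```python
# Sketch of the same idea as a density map: bin many synthetic tracks onto a
# grid and let color intensity stand for probability, so "where" and "how
# certain" are read off the same picture. Not real forecast data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n_tracks, n_steps = 500, 40
xs, ys = [], []
for _ in range(n_tracks):
    headings = np.cumsum(rng.normal(0.0, 0.08, n_steps))
    xs.append(np.cos(headings).cumsum())
    ys.append(np.sin(headings).cumsum())

# 2-D histogram of all track points: cell value ~ how many tracks passed through.
counts, xedges, yedges = np.histogram2d(
    np.concatenate(xs), np.concatenate(ys), bins=80
)

fig, ax = plt.subplots(figsize=(7, 5))
im = ax.imshow(
    counts.T,  # transpose so x runs horizontally
    origin="lower",
    extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]],
    cmap="Reds",
)
fig.colorbar(im, ax=ax, label="track density (relative likelihood)")
ax.set_title("Track density: brighter = more ensemble members")
plt.show()
```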
I gotta go to bed now.
Sure, calling people stupid is a great way to solve all kinds of problems.
The talk of “stupid people” here is so inane - it’s dataviz, not your fav underground music album that mom just isn’t with it enough to get.
Believe it or not, there are a lot of people who don’t share the exact same knowledge base as you, but do have knowledge and experience. Not being able to immediately intuit what a rainbow snowcone communicates is not a sign that they are simply beyond all hope.
There’s never going to be an absolutely perfect way to communicate something to everyone with the same image, but I think we can do better than “see this blob? Being in the blob means you are in danger. Being outside the blob also means you are in danger.”
maybe they could put pop-ups on maps even
I would imagine you could animate the calculated paths of a tornado on top of the regular risk plot. The viewer would then see storms of roughly constant size scooting about, mostly within the risk region, but occasionally straying outside of it.
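Something like this matplotlib FuncAnimation sketch is roughly what I have in mind; the storm tracks and the shaded “risk region” are entirely synthetic stand-ins, not NOAA products:

```python
# Rough animation sketch: a handful of simulated storms advance each frame,
# mostly staying inside a shaded "risk region", a few wandering outside it.
# All data here is synthetic.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from matplotlib.patches import Ellipse

rng = np.random.default_rng(2)
n_storms, n_frames = 12, 60

# Pre-compute wandering tracks that drift roughly northwest.
steps = rng.normal(loc=[-0.15, 0.10], scale=0.12, size=(n_storms, n_frames, 2))
tracks = steps.cumsum(axis=1)  # shape: (storm, frame, xy)

fig, ax = plt.subplots(figsize=(6, 6))
ax.set_xlim(-12, 3)
ax.set_ylim(-3, 10)

# Shaded stand-in for the published risk region.
risk = Ellipse((-5, 3.5), width=14, height=9, angle=-25,
               facecolor="red", alpha=0.15, label="risk region")
ax.add_patch(risk)
ax.legend(loc="upper right")

dots, = ax.plot([], [], "o", color="darkred", markersize=6)


def update(frame):
    # Move every storm marker to its position at this frame.
    xy = tracks[:, frame, :]
    dots.set_data(xy[:, 0], xy[:, 1])
    return (dots,)


anim = FuncAnimation(fig, update, frames=n_frames, interval=100, blit=True)
plt.show()
```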
Or will someone think there are hundreds of tornadoes heading their way?