Originally published at: Large study finds no link between cellphones and cancer - Boing Boing
…
Wow, it’s still true. Milliwatt transmitters operating at microwave frequencies remain harmless. Unless you stab yourself in the eye with one.
I haven’t heard lately, but I wonder if any members of the anti-5G moron brigade are still campaigning against cell phones, or claiming there are microchips in the vaccines, that flavor of nonsense. Not that I want to hear anything from them; I just wonder if they’re still peddling their BS.
… well it’s not like people hold phones up to their heads anymore
we should be worried about hand cancer now
Did the researchers find some problem with all the other EM-cancer studies? I don’t have the skills to know, but I’m starting to think that these are just easy studies to get funded.
The cell phones that cause cancer and those that prevent it cancel out.
I had a similar thought - since people stopped putting phones to their ears, like … well, phones, and started pointing the bottom of their phones at their chins, we should probably expect a greater incidence of lower jaw cancer. /s
… I know the amount of time I spend “talking on the phone” has dropped precipitously in the last 20 years or so
It used to be that paying by the minute meant we were being ripped off; now it means we pay almost nothing
I don’t understand it, so it must be dangerous!
On the other hand, giving a smartphone or tablet to an infant is not doing children any favors. But that has more to do with frequencies of light flashing garbage into their brain.
Or butt cheek cancer?
giving a smartphone or tablet to an infant is not doing children any favors
Youtube Kids spammers rack up billions of views on disturbing, violent, seemingly algorithmic videos - Boing Boing
Indeed, the main harm of these devices is now, and always was, the emissions within the visible and audible spectra.
Did the researchers find some problem with all the other EM-cancer studies? I don’t have the skills to know, but I’m starting to think that these are just easy studies to get funded.
The problem with the earlier studies is that there’s a complete mess of them.
This study was a “meta-study”: it analyzed five thousand previous studies on the topic, rejected those that had various biases or were considered “weak”, and ended up selecting 63 scientifically rigorous studies for inclusion in the final review. The conclusion of this meta-study was:
The review found no overall association between mobile phone use and cancer, no association with prolonged use (if people use their mobile phones for 10 years or more), and no association with the amount of mobile phone use (the number of calls made or the time spent on the phone).
Will this shut up the moron brigade? Doubtful. However, when they start making waves in the news again, this study is the best available information to give the journalists.
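For anyone curious what pooling those 63 studies actually means mechanically, here’s a toy sketch of fixed-effect, inverse-variance pooling of relative risks in Python. The numbers are invented for illustration and are not from the actual review:

```python
import math

# Toy fixed-effect meta-analysis of relative risks (hypothetical numbers,
# NOT from the actual review). Each study reports a relative risk (RR)
# and a 95% confidence interval; pooling happens on the log scale.
studies = [
    # (RR, 95% CI lower, 95% CI upper)
    (1.05, 0.90, 1.22),
    (0.97, 0.85, 1.11),
    (1.02, 0.88, 1.18),
    (0.99, 0.92, 1.07),
]

weights, log_rrs = [], []
for rr, lo, hi in studies:
    # Standard error recovered from the CI width: (ln(hi) - ln(lo)) / (2 * 1.96)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    weights.append(1 / se**2)          # inverse-variance weight
    log_rrs.append(math.log(rr))

pooled_log = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled RR = {math.exp(pooled_log):.3f}")
print(f"95% CI    = {math.exp(pooled_log - 1.96 * pooled_se):.3f} "
      f"to {math.exp(pooled_log + 1.96 * pooled_se):.3f}")
```

With these made-up inputs the pooled relative risk comes out near 1.0 with a confidence interval straddling 1.0, i.e. “no association”, which is essentially what the review reports across its 63 included studies.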
I always appreciate when reviews and meta-analyses like this highlight just how high a proportion of the studies in a given field are useless for drawing actual conclusions.
we should probably expect a greater incidence of lower jaw cancer. /s
I’m thinking thumb cancer from all the doom scrolling.
Yeah, it’s crazy that only about 1% of those studies made the cut: 63 of roughly five thousand is about 1.3%.
I wonder about many of those left out. Did they cut studies just because they were funded by cell companies or other parties with a vested interest in a specific outcome? What percentage of the studies with conflicted funding were less rigorous? Was any of their methodology sound?
I’m wondering if I can use this to help calibrate my BS meter. If I see “the authors gratefully acknowledge a grant from the Related Industrial Institute” in a paper, is that a 50% indicator of bad science? 99%? 10%? What about papers funded by “The Subreddit Opposed to Things”? I think we could learn a lot about what to trust, and identify the indicators that we shouldn’t trust a study.
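That calibration question is basically Bayes’ rule. Here’s a toy version in Python with completely invented rates, just to show how “what fraction of conflicted-funding studies are bad” turns into an updated belief once you spot the funding acknowledgment:

```python
# Toy Bayesian "BS meter" update (all rates invented for illustration).
# Question: given that a paper acknowledges funding from an interested
# party, what's the probability it's bad science?

p_bad = 0.50               # hypothetical prior: half of papers are "useless"
p_funded_given_bad = 0.40  # hypothetical: conflicted funding rate among bad papers
p_funded_given_good = 0.10 # hypothetical: conflicted funding rate among good papers

# Bayes' rule: P(bad | funded) = P(funded | bad) * P(bad) / P(funded)
p_funded = p_funded_given_bad * p_bad + p_funded_given_good * (1 - p_bad)
p_bad_given_funded = p_funded_given_bad * p_bad / p_funded

print(f"P(bad | conflicted funding) = {p_bad_given_funded:.2f}")  # 0.80 with these inputs
```

The funding line is only as informative as the gap between those two conditional rates, and you’d need something like this review’s recorded exclusion reasons to estimate them from real data.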
It’s been a while since I had to conduct a deep literature search of anything, but in my experience at least 75% of papers in any field I’ve looked into are close to useless in terms of their conclusions. They may have good hypotheses, suggest an interesting direction via anecdotal evidence, or even propose a viable experimental setup. But then the experiment is so underpowered it’s incapable in principle of demonstrating anything, or they mess up the statistical analysis, or they argue for a conclusion that just isn’t supported, or they ignore (or were unaware of) a dozen other more likely explanations.

That’s all normal, and not all of it is necessarily bad; I think a lot of it comes from screwy publishing norms and requirements. In a healthy field, reviews like this come along and separate the wheat from the chaff while using one study’s strengths to shore up another’s weaknesses. In an unhealthy field, everyone takes some weak study as gospel truth for one reason or another and no one does the necessary follow-ups, sometimes for generations.
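On the “so underpowered it’s incapable in principle of demonstrating anything” point, here’s a quick sketch using statsmodels’ power calculator; the effect size and sample sizes are hypothetical:

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical scenario: a small true effect (Cohen's d = 0.2) tested at
# the usual two-sided alpha = 0.05. How often would a study of a given
# size actually detect it?
analysis = TTestIndPower()

for n_per_group in (20, 100, 400):
    power = analysis.power(effect_size=0.2, nobs1=n_per_group, alpha=0.05)
    print(f"n = {n_per_group:>3} per group -> power = {power:.2f}")

# With n = 20 per group, power comes out around 0.1: the study would miss
# a real effect roughly 90% of the time, yet a null result still gets
# written up as evidence of "no effect".
```

Run it the other way with analysis.solve_power(effect_size=0.2, power=0.8, alpha=0.05) and you get roughly 394 subjects per group, which is why so many small studies can’t settle anything.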
This topic was automatically closed after 5 days. New replies are no longer allowed.