Turning off your camera during Zoom calls is actually good for the environment

Do you seriously think that transmitting a video stream across the internet uses no energy at all?

Since I have to look at the admin panel in Zoom fairly often for work, and I can see the bandwidth being used and whose processors are thrashing, might I humbly suggest that if you want to use less energy and bandwidth, close some fucking tabs in Chrome? That piece of shit eats your data, eats your processing power, eats your memory, and shits data back to the mothership.
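If you want to see for yourself what Chrome is eating, here's a minimal sketch using the third-party psutil package (the name matching is my assumption; adjust it for Chromium, Edge, or whatever your flavour is):

```python
# Tally how much memory all Chrome processes are holding on this machine.
# Requires: pip install psutil
import psutil

chrome_rss = 0
chrome_count = 0
for proc in psutil.process_iter(['name', 'memory_info']):
    name = (proc.info['name'] or '').lower()
    mem = proc.info['memory_info']
    if 'chrome' in name and mem is not None:
        chrome_count += 1
        chrome_rss += mem.rss  # resident set size, in bytes

print(f"{chrome_count} Chrome processes holding {chrome_rss / 2**30:.2f} GiB")
```

Run it with a few dozen tabs open and then again after closing them; the difference tends to make the point for you.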

But yeah, your face isn’t always necessary, and video uses power and bandwidth (though less than you’d think). Use it wisely.

No, but I want to see their data to see how much of it is inflated hype.

Also, who funded it.

(I respectfully submit that the machines that make up the internet’s core backbone wouldn’t notice the power difference between an audio-only stream and one with a video component, which is where I was coming from with that rather bold statement.)


No, but most of the energy used is not there to support a video call; it is there to support a maximum throughput. That energy is used even if the amount of data sent doesn’t hit that maximum.

The ARRIS SB6183 in my office uses basically the same amount of power while I’m moving a lot of data as it does when I’m sending nothing (or nothing on purpose; I’m sure that of the N devices in the house, some are sending out the occasional “hey, is this the latest software?”, and the iPhones are all asking “can I reach that one text file on apple.com that says I’m not behind some sort of WiFi paywall?”, plus various other things that make up the dull hum of “not exactly zero data usage”). The WiFi base station and access points in the house also use a constant amount of energy.

Or at least they all use the same amount, as closely as a Kill A Watt can measure.
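To put toy numbers on that (the wattages below are illustrative assumptions, not my actual Kill A Watt readings):

```python
# If the home network gear draws (roughly) constant power, the marginal
# energy of adding a video stream at home rounds to zero.

MODEM_W = 8.0    # assumed: cable modem draw, idle or busy
WIFI_W = 10.0    # assumed: base station + access points, idle or busy
CALL_HOURS = 1.0

idle_kwh = (MODEM_W + WIFI_W) * CALL_HOURS / 1000
busy_kwh = (MODEM_W + WIFI_W) * CALL_HOURS / 1000  # same draw while streaming

print(f"marginal energy of the stream: {busy_kwh - idle_kwh:.3f} kWh")  # 0.000
```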

I expect the upstream gear at Comcast is the same. I expect the routers at Comcast’s exchange are the same. I’ve never had my hands on an upstream cable TV internet box, so I don’t know for sure. It has been 15~20 years since I had my hands on the big routers that get used at peering points, but at the time they all drew constant power (and ran on -48V DC). In theory you might save on power and cooling costs if they could scale that up and down with usage, but in practice those systems all run at at least 80% capacity all the time, so any extra engineering effort spent making the system actually reach a low-power state would be better spent making the high-power state even just a little more efficient.
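A rough illustration of that tradeoff, with made-up but plausible numbers:

```python
# A router that is busy nearly all the time gets more from a small
# full-load efficiency win than from a low-power state it rarely enters.
# Every figure here is an assumption for the sake of the comparison.

FULL_LOAD_W = 2000.0   # assumed draw of a big core router
IDLE_FRACTION = 0.05   # assumed: near-idle only 5% of the time
SLEEP_SAVINGS = 0.60   # assumed: a deep low-power state cuts draw by 60%

saved_by_sleep = FULL_LOAD_W * SLEEP_SAVINGS * IDLE_FRACTION    # averaged watts
saved_by_tuning = FULL_LOAD_W * 0.04 * (1 - IDLE_FRACTION)      # 4% full-load win

print(f"avg W saved by a sleep state:      {saved_by_sleep:.0f}")   # ~60 W
print(f"avg W saved by a 4% full-load win: {saved_by_tuning:.0f}")  # ~76 W
```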

Desktop and especially laptop CPUs get a lot out of being able to reach a low-power state because they actually do get there a lot of the time. I’ll also note that even decades ago the switching/routing backplane of those devices was all custom, but many were designed with a more conventional CPU working out routing tables and running BGP and other routing protocols. Those CPUs would have gotten the low-power goodness almost for free just by replacing the CPU every time a new routing product got designed. However, that CPU doesn’t really do anything when packets come in to get routed. It does stuff when routing tables need to change (or when they might need to change). So I could see them having a little variance, but not because you switch from 4K to regular HDTV. More because someone ran over a fibre with a backhoe and now things need to adjust. Or because someone made a peering policy change, and again things need to adjust. Or because somebody has a router that is busted, boots up, works for a little bit, can’t handle something, and dies, only to come back and do it again. That one is always fun.
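A toy sketch of that control-plane/data-plane split, all names illustrative:

```python
# Forwarding happens in fixed-function hardware (modeled here as a dict
# lookup with the same tiny cost per packet, video or audio), while the
# general-purpose CPU only does work when the topology changes.

routing_table = {"10.0.0.0/8": "if0", "192.168.0.0/16": "if1"}

def forward(prefix: str) -> str:
    # data plane: constant cost per packet, regardless of traffic type
    return routing_table[prefix]

def on_topology_change(prefix: str, new_interface: str) -> None:
    # control plane: CPU burns cycles only on events like a cut fibre or a
    # peering policy change, not on how many packets are flowing
    routing_table[prefix] = new_interface

forward("10.0.0.0/8")                    # routine traffic: no CPU-side work
on_topology_change("10.0.0.0/8", "if2")  # backhoe event: now the CPU works
```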

So on the face of it, no: those packets pass around the internet on an already-fixed energy budget. They will use extra power on your end, but maybe not even enough to measure. They will use extra power for anyone who would be displaying one more moving picture box, but again maybe not enough to measure. Since Zoom isn’t point to point and drags all the data back to its own servers to send it out again, there is a third point that uses a little more power, and again maybe not enough to measure.
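Here’s the back-of-envelope on that third point, assuming a round 2 Mbps per video stream:

```python
# In a full mesh each of N clients uploads its stream to N-1 peers; with a
# central relay (the Zoom model) each client uploads once and the server
# fans it out, which is exactly why the server ends up doing real work.

N = 10                # participants
STREAM_MBPS = 2.0     # assumed per-stream video rate

mesh_uplink_per_client = (N - 1) * STREAM_MBPS   # 18 Mbps from each client
relay_uplink_per_client = STREAM_MBPS            # 2 Mbps from each client
server_egress = N * (N - 1) * STREAM_MBPS        # 180 Mbps the relay re-sends

print(mesh_uplink_per_client, relay_uplink_per_client, server_egress)
```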

On the other hand, the peak data rate that your cable company builds for, and all the peering points, and all sorts of other bits of the internet, end up getting designed in part around how much data people think will be needed in a few years (or whatever the design horizon is). So consistently pushing today’s backbones to the limit means the next ones will be designed bigger (if they can be). Those future designs will likely use more power to pass more data (this isn’t always true; back in the 1990s some higher-bandwidth systems used less power, or at least that’s my judgment from the proxy of needing less heat mitigation, but they frequently used more).

So the thrust of “this stuff isn’t free” I believe. The actual numbers they came up with seem like unwashed hogshit, though. Which is bad, because someone will take these complete nonsense numbers, compare them with the energy of moving people around, and come up with very bad results (like “let’s all drive to the office tomorrow for a meeting because that’ll use less power than Zoom!”, which I can guarantee you it won’t, even if people only live five miles from the office).
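For a sense of scale on that last parenthetical (gasoline carries about 33.7 kWh per gallon; the per-call figure is a deliberately generous guess on my part):

```python
# Even a short commute dwarfs one attendee's share of a video call.

GASOLINE_KWH_PER_GALLON = 33.7  # energy content of a gallon of gasoline
MPG = 30.0                      # assumed car efficiency
ROUND_TRIP_MILES = 10.0         # "five miles from the office", both ways
CALL_KWH_PER_HOUR = 0.05        # generous guess at one attendee's marginal
                                # share of a one-hour video call

drive_kwh = ROUND_TRIP_MILES / MPG * GASOLINE_KWH_PER_GALLON
print(f"driving: {drive_kwh:.1f} kWh vs calling: {CALL_KWH_PER_HOUR:.2f} kWh")
# driving: 11.2 kWh vs calling: 0.05 kWh, a couple hundred times more
```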


THIS. MANY THIS.

Had more than a few hour-long meetings where I got sucked in, only to spend most of it idle aside from the 30 seconds of “yes, our system can do this easily”, or “yes, please send me the information”, or worse: “yes, I’m waiting on the information I requested in the last meeting to do this”.


Thank you; that was the point I was trying to make. 🙂

