It’s probably a cry of pain considering we’ve harpooned it.
Did they release the raw data? It’d be fun to play with
We wish we’d harpooned it.
i’m in space. Spaaaaaaaaaaaaaaaaaaaceeeeeeeee…
Damnit, Xeni, that was supposed to be top secret!
Now the Men in Black are going to have to mind-wipe all of your readers.
i think it needs more beat. the kids all love the beat.
Not to be a party pooper - but really, it’s no more meaningful to turn this data from the magnetic field into sound (and therefore a piece of music) than it is to turn it into a painting, or instructions for a dance, or a map of terrain, etc. There are so many variables to take into account when you perform the translation that different interpretations into the same medium (like sound) could be completely different. Notice this recording is credited to a composer. He/she has tweaked the mapping from magnetic data into sound in such a way as to make it attractive and interesting, but that has nothing to do with the comet and the measurements.
I was so sure I was about to be rickrolled right then… But clicked anyway.
The comet also paints and dances?! Go comet!
We’re whalers on the moon. We carry a harpoon. But there ain’t no whales, so we tell tall tales and sing our whaling tune. We’re whalers on the moon. We carry a harpoon. But there ain’t no whales, so we tell tall tales and sing our whaling tune. We’re whalers on the moon. We carry a harpoon. But there ain’t no whales, so we tell tall tales and sing our whaling tune…
A comet is singing to us? How have we not hunted it down and killed it yet?
'Merica’s getting soft these days. Thanks, Obama.
Not entirely true. We’re pretty good at pattern recognition, and some may be able to spot the rise and fall of tones, the change of tempo, etc. more easily using audible representation vs. visual.
That said, a good DSP would provide more objective analysis of the signal.
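The kind of objective check a DSP would give you can be sketched with a plain FFT. The signal below is synthetic - a 45 mHz tone buried in noise as a stand-in for a magnetometer trace, since the actual measurement series isn’t available here - but the peak-finding step is the real technique:

```python
import numpy as np

# Synthetic stand-in for a magnetometer trace: a 45 mHz tone in noise,
# sampled at 1 Hz for four hours. (Illustrative values, not Rosetta data.)
rng = np.random.default_rng(0)
fs = 1.0                                  # samples per second
t = np.arange(0, 4 * 3600) / fs
signal = np.sin(2 * np.pi * 0.045 * t) + 0.3 * rng.standard_normal(t.size)

# Objective answer to "what tones are in there": locate the spectral peak.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
peak_hz = freqs[spectrum.argmax()]
print(f"dominant frequency: {peak_hz * 1000:.1f} mHz")  # dominant frequency: 45.0 mHz
```

No tweaking of timbres involved - the peak location is the same whoever runs it, which is the sense in which it’s more objective than listening.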
I’m most interested in whether the reverb is part of the observation. That would indicate a resonance in the system.
Just had a flashback. I saw this cheesy-but-earnest SF film on Saturday Morning TV as a kid.
It was about an asteroid, not a comet, but it did kind of sing.*
*Or, in the original novel, wail.
Well, I didn’t say that the sonification of the data wasn’t useful. Just that it’s not meaningful to say we’ve discovered a mysterious, secret song the comet is singing, the way it’s been presented in some overly credulous media accounts. You can turn the GPS data from your bike ride into a song, or precipitation data from all the countries in the world into an enormous symphony, etc., and in the right hands with the right choice of timbres and the right conversion methods, a composer can make the data sing beautifully, whatever mundane thing it’s originally from.
I think this is my favorite thing I’ve heard anyone say this week. Makes me proud of us as a species.
Somebody needs to broadcast this to the whales and see what they sing back.
So the comet can sing, but how long do you think it would last on ASTRONOMICAL OBJECTS FLOATING IN SPACE GOT TALENT?
I wish they gave more details than “the frequencies were increased”.
For me, if it’s just a case of playing back the samples faster, then it’s not unreasonable to say that it’s a sound - just as it’s reasonable to shift bat calls down a couple of octaves so we can hear them. It’s a very direct transformation, and the signal collected works in a similar way to signals we use to produce sound.
If it’s a complex mapping or resynthesis, then I’d agree that saying it’s making a sound is not really right, and you can use any data you want as a source for generating sound. Then it becomes a question of whether there is some utility or aesthetic value to the output: does it give insight into the process creating it? Is it interestingly different to the output produced by other processes or noise?
Frustrating that we don’t know where it falls on that spectrum. But I reckon they’re working at breakneck speed to get things out at the moment…
You know, if this was a news report being given in the background of a film, something very bad would be happening immediately afterwards in the narrative.
It’s fairly obvious how to turn a datastream into an AM audible analog, but how do you decide how to turn it into a two-dimensional image without arbitrarily choosing a width and height based on nothing at all in the data?
My first question was also - where did that reverb come from? What part of the datastream did they decide was analogous to reverb?
But - regardless - why not have fun with datastreams you encounter? Maybe you’ll find a lottery predictor.
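The image point is easy to make concrete: a 1-D stream carries no intrinsic width, so any 2-D rendering bakes in a choice the data never made (toy values below, purely illustrative):

```python
import numpy as np

# The same 1-D stream rendered as two different "images": nothing in the
# data picks the width, so the visual structure depends entirely on it.
stream = np.sin(np.arange(600) * 0.7)
img_wide = stream.reshape(20, 30)    # 20 rows x 30 columns
img_tall = stream.reshape(30, 20)    # 30 rows x 20 columns
# Identical samples, different pictures - the "terrain" is in the reshape.
```

Audio dodges this because a stream already has the right shape for a waveform; the only free parameters are playback rate and amplitude scaling.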