Interesting, but I would have preferred an actual explanation of what confidence means in this context: since we can't measure ALL the data, we have to relate the results we get from limited measurements to the answer we'd get if we could measure ALL the data.
For example, say we're tasked with finding the average weight of bricks from a production line where we know there is some amount of variation. Obviously there exists an average weight, but we probably can't weigh every single brick. So we weigh a representative sample of them and calculate the average of that.
It's expected that our measured sample average won't exactly equal the average of the total population, but we can mathematically determine limits on how confident we are in our result.
So if an omniscient being knew that the true average weight was 5.000 pounds, and the average from our sample was 5.047 pounds, a 95% confidence level gives us a range around our 5.047 constructed so that 19 out of 20 such representative samples would produce a range that contains the true average.
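Here's a minimal sketch of what that calculation looks like, using made-up numbers (assuming a true mean of 5.000 lb, a spread of 0.25 lb, a sample of 100 bricks, and the usual normal approximation with z ≈ 1.96):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical production line: true mean and spread are assumed values.
true_mean, true_sd = 5.000, 0.25

# Weigh a representative sample of 100 bricks.
sample = rng.normal(true_mean, true_sd, size=100)

mean = sample.mean()
sem = sample.std(ddof=1) / np.sqrt(len(sample))  # standard error of the mean

# 95% confidence interval (normal approximation, z ~ 1.96).
lo, hi = mean - 1.96 * sem, mean + 1.96 * sem
print(f"sample mean = {mean:.3f} lb, 95% CI = ({lo:.3f}, {hi:.3f})")
```

The omniscient being's 5.000 lb never enters the calculation; the range comes entirely from the sample itself, which is the whole point.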
Where I think a lot of the misunderstanding comes in: if our measurements find that the average weight is 5.047 pounds with a confidence level of 95%, that doesn't mean there's a 5% chance the actual average is 10 pounds, or a hundred pounds; even the unlucky 1 in 20 samples still lands close to the truth, so mathematically 95% is much better than it sounds.
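You can see both points with a quick simulation, again with assumed numbers (repeatedly drawing samples of 100 bricks from a population with a true mean of 5.000 lb and checking how often the interval catches it, and how far off the misses are):

```python
import numpy as np

rng = np.random.default_rng(seed=2)
true_mean, true_sd, n, trials = 5.000, 0.25, 100, 1000

covered, worst_miss = 0, 0.0
for _ in range(trials):
    sample = rng.normal(true_mean, true_sd, size=n)
    mean = sample.mean()
    sem = sample.std(ddof=1) / np.sqrt(n)
    lo, hi = mean - 1.96 * sem, mean + 1.96 * sem
    if lo <= true_mean <= hi:
        covered += 1                                    # interval caught the true mean
    else:
        worst_miss = max(worst_miss, abs(mean - true_mean))  # how far off the misses are

print(f"{covered}/{trials} intervals contained the true mean")
print(f"largest error among the misses: {worst_miss:.3f} lb")
```

Roughly 950 of the 1000 intervals contain the true mean, and even the ones that miss are off by a few hundredths of a pound, not by pounds.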
So, since we can't measure every temperature that ever was, or every ice cap thickness, etc., we measure what we can, and from that get CLOSE, with a calculable range of error and a probability attached to it, to what the actual result would be if we COULD measure everything.