We continue to get fascinating little glimpses from John Mack's blog survey, and I just discovered the chart below on his site.
I think this is all very interesting. What we don't have is data on p-values, statistical significance, number of responses, and so on, so it is hard to judge how scientifically rigorous these results are.
The smaller the p-value, the more significant the result is said to be. For example, if someone argues that "there's only one chance in a thousand this could have happened by coincidence," they are implying a p-value of 0.001, i.e. a 0.1% level of statistical significance.
The lower the significance level, the stronger the evidence. And if there are too few respondents, say only ten or fifty, then the result may have more to do with coincidence than reality. So while the blogs below that are red may indeed be more negative, the difference between them may not be statistically significant.
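To see why the number of respondents matters so much, here is a minimal sketch of a standard two-proportion z-test in Python. The counts are made up purely for illustration, since the actual survey responses aren't published; the point is only that the same apparent split can be noise with ten respondents and highly significant with five hundred.

```python
import math

def two_proportion_pvalue(neg_a, n_a, neg_b, n_b):
    """Two-sided p-value for the difference between two proportions,
    using a pooled z-test (one common textbook approach)."""
    p_a, p_b = neg_a / n_a, neg_b / n_b
    pooled = (neg_a + neg_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Convert |z| to a two-sided p-value under the standard normal distribution
    return math.erfc(abs(z) / math.sqrt(2))

# Same 60% vs. 40% "negative" split, but very different sample sizes:
print(two_proportion_pvalue(6, 10, 4, 10))        # ~0.37  -- could easily be coincidence
print(two_proportion_pvalue(300, 500, 200, 500))  # ~2e-10 -- very unlikely to be coincidence
```

The apparent difference is identical in both cases (60% negative versus 40% negative); only the number of respondents changes, and that alone is what separates "could easily be coincidence" from "statistically significant."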
That's the theory.
But since none of this data is presented with the information needed to actually evaluate it, this becomes less about science and more about entertainment.
Having said that, you may not be surprised to hear that I think one has to be careful evaluating what you see below . . .