11-23-2007, 05:11 AM
I have come across a skeptic who would rather believe much smaller research groups, because they are substantially faster at reaching conclusions than large research organizations such as the AAAS, NAS and IPCC. He believes this makes the smaller groups more reliable, since they are probably working from newer data, and that the data used by the AAAS/NAS/IPCC is outdated. He therefore believes that there is still genuine debate going on among credible sources.
I have told this person about the rigorous peer review that reports from the AAAS go through. I have also told him that this organization seldom, if ever, makes sensationalist claims until it is absolutely sure that its findings are as close to the truth as they can get, which makes it a very credible source compared to small research groups and think tanks. Furthermore, I have told this person my personal belief that I would rather trust the findings of the Max Planck Institute for Solar System Research than those of the Swedish Defence Research Agency when it comes to questions about the sun's role in heating the earth.
I have given him other arguments as well, but he still chooses to believe these small and, in my view, less credible research groups, mainly because they are small and therefore fast and more up to date. Is there any valid and reasonable argument against the claim that the AAAS, NAS and IPCC are less credible because they are too slow? Is there any way to find out?
Furthermore, I do not recall that Wonderingmind42 ever mentioned the UN climate panel, the IPCC, as a credible source. Where in the credibility spectrum would they be placed?
Linus Leonard, Swedish Composer and Musician
11-23-2007, 03:44 PM
While it is true that smaller research groups and think tanks may be looking at newer research, and therefore might have a valid argument, there is a greater probability that their research is flawed, for simple statistical reasons.
As an example: if you flip a coin 10 times and it comes up side A 8 of those 10 times, that small sample suggests an 80% chance for side A. But if you run 10,000 trials of 10 flips each, the pooled result will come out at about a 50% chance. So even if one new sample shows the 80% result, adding it to the 10,000 existing trials makes almost no difference to the overall estimate; it would take a substantial number of new trials all showing the 80% result before the estimate moved.
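To put rough numbers on that, here is a quick Python sketch of the same coin-flip argument (the trial counts are just the illustrative figures from above, nothing more):

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def heads_fraction(n_flips):
    """Fraction of 'side A' results in n_flips fair-coin tosses."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

# A single small sample of 10 flips can easily stray far from 50%.
small_sample = heads_fraction(10)

# Pool 10,000 trials of 10 flips each (100,000 flips total):
# the pooled estimate converges on the true 50%.
total_heads = sum(heads_fraction(10) * 10 for _ in range(10_000))
pooled = total_heads / 100_000

# Now add one anomalous 8-out-of-10 sample to the pool:
# the pooled estimate barely moves at all.
updated = (total_heads + 8) / (100_000 + 10)

print(f"one small sample:        {small_sample:.2f}")
print(f"pooled estimate:         {pooled:.4f}")
print(f"after adding 8/10 trial: {updated:.4f}")
```

Running this, the pooled estimate sits within a fraction of a percent of 0.5, and the anomalous sample shifts it by only a few hundred-thousandths: that is the sense in which one "new" small study cannot outweigh a large accumulated body of evidence.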
Hope this helps I will think about this more and get back to you.
11-24-2007, 12:31 AM
I work in academia, although in an entry-level position (a research assistant; my degree's in physics but my intended graduate work is in cognitive science, so I need to take a few courses and work in the field before I'm accepted). While I'm closer to the "common person" category, I'd like to think I'm getting closer to "professional individual" every day. I've been published (I'll provide citations if you want) and have spent countless hours trying to get published in other journals. You have little reason to weight my response very highly, but I'd like to share my personal experience with two papers, just to demonstrate. (Note: my speciality is experimental psychology and psychological modelling, not climate science, so it's a bit different, but the general idea is similar.)
One of our papers did groundbreaking work in combining several distinct fields, including mathematics, animal learning, computer science, economics, and decision theory, by demonstrating something that no one thought was possible. It was a small paper, so we aimed high -- Nature itself. It was rejected without review, because the paper was apparently "too specialized" for a general readership, and not important enough to be published on that merit alone. We then tried Nature Neuroscience, a smaller journal with a more focused reader base. It was also rejected, for the same reasons. A few stages down the line and we're working with Behavioural Processes, a much smaller journal but one that believes its subscribers would benefit from learning about our research.
From this, I learned that larger journals need *really* convincing arguments that are either very broad in their interest or very universal in their impact. As has been mentioned already, though, I have no personal experience with Nature's own review process.
Speaking of review processes, sometimes the smaller journals get it wrong. They know they're competing against other journals in their size bracket, so they rush to publish what they see as big news. For instance, one of the journals that competes with Behavioural Processes (it's actually a bigger journal, but less specialized) is JEP:ABP (Journal of Experimental Psychology: Animal Behavior Processes). It's actually a pretty decent journal, but they were still looking for publications, so they accepted and published a particular paper by another researcher on a model of a task that had never been modelled before. The problem is, in their rush, they didn't check that model thoroughly: the math is fundamentally flawed, despite getting through peer review (I personally suspect that psychologists often aren't as skilled at math as they should be). My professor spotted the error, and together we derived a fixed version of the model with proper math. In the end, though, our model only got published as a response to theirs -- meaning someone could cite their model without noticing the math is flawed simply by leaving out our response.
In my experience, though peer review is a gruelling process (let's face it, no one wants to be proven wrong and everyone in academia wants to prove everyone else is wrong, if you'll pardon the hyperbole), things can slip through the cracks. And smaller journals, with less prestige to lose and more to gain than the massive journals, may skimp on the hard review just a tad more than they should, meaning they're more likely to publish incorrect statements.
The more prestigious a journal (prestige being measured in number of citations), the more scientists will try to publish in it, which means more competition, which means stricter controls. Sometimes something slips through the cracks, but that happens *far* more often at smaller journals than at more prestigious ones.
A very similar effect happens when you compare single peer-reviewed journals to the large professional organizations of science. Just as Nature has a HUGE reputation to back up, the NAS, AAAS, AMS, and more have not only their reputation but their very expertise to stake here (if the American Meteorological Society screws up in climate science, that's one really big OOPS). Meanwhile, a peer-reviewed journal has less to lose and more to gain the smaller it gets.
This is why it takes a while for things to pass muster in the bigger societies: more people need to be satisfied, more people need to agree, and all the kinks need to be worked out. Smaller journals don't have as much of a budget for editorial processes or as large a readership to poll for peer review, plus they can afford to be less picky (there's only a handful of papers at the level of Nature, Science, or the like, but there are far more when you look in the more specialized fields).
11-25-2007, 07:51 PM
Tbolger and Tempest Stormwind: I thank you both for your insightful answers. I will try these arguments on the skeptic in question, and see if it is enough to change his mind :)
08-01-2008, 07:50 PM
The institutions of science are very much like a lumbering behemoth: slow to move, slow to change direction. The peer review process considerably slows the spread of new concepts, ideas and methodologies.
This is the necessary price for ensuring that what does get through is of high quality and likely to be correct. There is so much new information that publishing all of it would slow the advancement of science. There needs to be a process for sorting the gold from the trash.
Science has become incredibly specialised, splitting into smaller and smaller fields, and peer reviewers tend to ignore analysis from outside their own field. Many years ago I saw what I thought was a compelling argument by some geologists that the body of the Sphinx was much older than accepted archaeology stated. As far as I can see, this theory went away without ever being properly considered by the archaeological community. Thus the theory was never tested by others: not proved true, but also not proved false.
I would be more confident rejecting an idea that was once accepted science but that the institutions of science have since moved away from than rejecting an idea that has never been properly considered at all. You have to soundly demonstrate that your new theory explains the data better than existing theories do.
Stormwind argues that good science will always get through; I am not so sure. I suspect that some good science has been lost along the way, but some ignored science does eventually get through. Clyde Tombaugh was eventually shown correct on many of his "crazy" theories, and Hannes Alfven's theories are now being taken into account.
Which way have the institutions of science been moving on the issue of climate change? Accepted science has gone from "man cannot change the climate", to "climate change will be a problem far in the future", to "climate change will soon be a problem", and now to "climate change is here".
The trend in science is away from the position of the deniers. Their position is one that the science community has carefully considered and moved on from.