I am a little suspicious of May's northern hemisphere curve. Other reconstructions I have seen had distinct signals for the Medieval Warm Period and the Roman Warm Period, and his doesn't. There are historical data for these events that don't rely on proxy measures, and I think most reasonable people accept that something was going on at those times in the NH, climate-wise.
I'm just pointing out the difficulty in using proxy data to prove or disprove the magnitude of the Holocene Climatic Optimum. We're making a lot of expensive decisions based on modeling, and any model that doesn't predict the HCO should have us asking why not.
So I went and read a bit of the paper (Marcott et al.) where May got his latitude curves.
Marcott et al. spend their paper describing the methods and statistics they used, and they compare their curve to two "famous" ones: Mann's tree-ring-based "hockey stick" and another tree-ring study.
Here's a sliced-and-diced selection from their first figure, annotated by me (e.g., adding the MWP pointers); the panels A, C, E, etc. are described in the quoted text below:
They write:
----
We took the 5° × 5° area-weighted mean of the 73 records to develop a global temperature stack for the Holocene (referred to as the Standard5×5 reconstruction) (Fig. 1, A and B). To compare our Standard5×5 reconstruction with modern climatology, we aligned the stack's mean for the interval 510 to 1450 yr B.P. (where yr B.P. is years before 1950 CE) with the same interval's mean of the global Climate Research Unit error-in-variables (CRU-EIV) composite temperature record (2), which is, in turn, referenced to the 1961–1990 CE instrumental mean (Fig. 1A). We then assessed the sensitivity of the temperature reconstruction to several averaging schemes, including an arithmetic mean of the data sets, a 30° × 30° area-weighted mean, a 10° latitudinal weighted mean, and a calculation of 1000 jackknifed stacks that randomly exclude 50% of the records in each realization (Fig. 1, C and D, and fig. S4). Although some differences exist at the centennial scale among the various methods (Fig. 1, C and D), they are small (<0.2°C) for most of the reconstructions, well within the uncertainties of our Standard5×5 reconstruction, and do not affect the long-term trend in the reconstruction.

In addition to the previously mentioned averaging schemes, we also implemented the RegEM algorithm (11) to statistically infill data gaps in records not spanning the entire Holocene, which is particularly important over the past several centuries (Fig. 1G). Without filling data gaps, our Standard5×5 reconstruction (Fig. 1A) exhibits 0.6°C greater warming over the past ~60 yr B.P. (1890 to 1950 CE) than our equivalent infilled 5° × 5° area-weighted mean stack (Fig. 1, C and D). However, considering the temporal resolution of our data set and the small number of records that cover this interval (Fig. 1G), this difference is probably not robust. Before this interval, the gap-filled and unfilled methods of calculating the stacks are nearly identical (Fig. 1D).
-----
and
---------
Our global temperature reconstruction for the past 1500 years is indistinguishable within uncertainty from the Mann et al. (2) reconstruction; both reconstructions document a cooling trend from a warm interval (~1500 to 1000 yr B.P.) to a cold interval (~500 to 100 yr B.P.), which is approximately equivalent to the Little Ice Age (Fig. 1A). This similarity confirms that published temperature reconstructions of the past two millennia capture long-term variability, despite their short time span (3, 12, 13). Our median estimate of this long-term cooling trend is somewhat smaller than in Mann et al. (2) though, which may reflect our bias toward marine and lower-latitude records.
-----------
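To make the quoted methods a bit more concrete, here's a minimal sketch of an area-weighted stack and the jackknife sensitivity test. This is my own toy code, not Marcott's actual pipeline: the cosine-latitude weighting (as a stand-in for grid-cell area) and the made-up records are my assumptions.

```python
import math
import random

def area_weighted_stack(records):
    """Average proxy anomalies, weighting each record by the cosine of
    its latitude (a rough stand-in for the area of its 5x5-degree grid
    cell). `records` is a list of (latitude_degrees, anomaly_degC)."""
    weights = [math.cos(math.radians(lat)) for lat, _ in records]
    weighted = sum(w * anom for w, (_, anom) in zip(weights, records))
    return weighted / sum(weights)

def jackknife_stacks(records, n_realizations=1000, drop_frac=0.5, seed=0):
    """Recompute the stack many times, randomly excluding half the
    records each time, to see how sensitive the curve is to any
    particular subset of records."""
    rng = random.Random(seed)
    keep = max(1, int(round(len(records) * (1 - drop_frac))))
    return [area_weighted_stack(rng.sample(records, keep))
            for _ in range(n_realizations)]

# Toy records: (latitude, anomaly) -- warm tropics, cooler high latitudes
records = [(0.0, 1.0), (30.0, 0.5), (-30.0, 0.6), (60.0, 0.0), (80.0, -0.5)]
stack = area_weighted_stack(records)
realizations = jackknife_stacks(records, n_realizations=200)
spread = max(realizations) - min(realizations)
```

The spread of the jackknifed realizations is what gives a feel for how much any single record (or half of them) moves the curve; Marcott et al. report such differences as small relative to their uncertainty band.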
So Marcott et al.'s "more global" approach (they included marine sampling) diluted the spikes and bumps of Mann's land-only tree-ring record.
Marcott et al. also gridded the planet into 5-degree "squares" (not uncommon in climate modeling) and used long averages. The "formal" definition of the MWP spans only about 300 years (roughly 950 to 1250 CE) ... which is only a "pixel or two" in the much longer time span that Marcott et al. were profiling.
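A quick toy demonstration of the "pixel or two" point (mine, not from either paper; the 0.4 °C bump height and the 1000-year averaging window are arbitrary assumptions): a 300-year excursion nearly vanishes under a long averaging window.

```python
def moving_average(series, window):
    """Centered moving average; the window shrinks at the edges."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

# A 0.4 degC warm bump lasting 300 years (roughly the MWP's span)
# inside a 2000-year otherwise-flat record.
years = range(0, 2000)
signal = [0.4 if 950 <= y < 1250 else 0.0 for y in years]

raw_peak = max(signal)                             # 0.4
smoothed_peak = max(moving_average(signal, 1000))  # roughly 0.12
```

With a 1000-year window, the 300-year bump survives only at about 300/1000 of its original height, which is the sense in which a short warm period gets averaged down toward invisibility.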
Mann only cared about a 1500-year look-back; Marcott et al. were going much longer (their 11,000-year curves are the B, D, F, etc. set).
May (etc.) didn't include the error bars (which show only a "one sigma" band anyway), but it's that error range that lets Marcott et al. say, as they do, that their curve is "indistinguishable within uncertainty" from Mann's.
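One informal way to read "indistinguishable within uncertainty" is that the two curves' one-sigma bands overlap at every time step. That is my loose paraphrase, not the statistical comparison the papers actually perform, and the toy curves below are invented:

```python
def bands_overlap(curve_a, sigma_a, curve_b, sigma_b):
    """True if, at every time step, the two curves' one-sigma bands
    overlap, i.e. |a - b| <= sigma_a + sigma_b. A loose, informal
    reading of 'indistinguishable within uncertainty'."""
    return all(abs(a - b) <= sa + sb
               for a, sa, b, sb in zip(curve_a, sigma_a, curve_b, sigma_b))

# Two toy curves that differ by up to 0.3 degC, each with 0.2 degC sigma
curve_a = [0.1, 0.0, -0.2, -0.4]
curve_b = [0.3, 0.3, -0.1, -0.3]
sigma = [0.2, 0.2, 0.2, 0.2]
# bands_overlap(curve_a, sigma, curve_b, sigma) is True here
```

Strip the sigma bands from a plot, though, and the same two curves just look "different" — which is why dropping the error bars changes the visual story.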
I'm not saying either curve (Mann's or Marcott's) is "more correct", but both papers describe how they got to where they got, and both include access to their background data ... so anyone "serious about the subject" can make use of it.
People doing serious research read the papers ... and then they chase down the references cited by those papers ... and then they chase down those references' references, AND they get copies of the data along the way. (In my old job, I'd prepare the data tapes and disks for such requests, and deal with ingesting into our system the tapes/disks from other sites; this was pre-internet.)
We, the "general public" on the outside looking in, rarely go to the lengths that the people in the field of study do. But they do (in any scientific field), so that they won't make errors their colleagues would consider absurd, errors that come from not chasing down the fine details.
That's how the system works, and it's how simple errors, systemic errors, or downright falsifications get rooted out.
--dick