Bucket list: Historic global ocean temperature data – the missing pedigree is a comedy of errors
By Hartmut Hoecht
This paper addresses the scientific community's glaring disregard for thermal data errors in global ocean probing. Recent and older records alike are compiled with temperatures noted in small fractions of degrees, while the collection processes often introduce errors much greater than a full degree. These coarse methods call into question the very underpinning of the historic database and the validity of the present one, as well as the all-important global temperature projections for the future.
Follow me into the exploration of how the global ocean temperature record has been collected.
First, a point of reference for further discussion:
Wikipedia, sourced from NASA Goddard, has an illustrative graph of global temperatures with an overlay of five-year averaging. Focus on the cooling period between the mid-1940s and mid-1970s. One can glean the rapid onset of the cooling, which amounted to approximately 0.3 degree C in 1945, and its later clearly discernible reversal.
The May 2008 paper by Thompson et al. in Nature got my attention; Gavin Schmidt of RealClimate and also the New Scientist commented on it. Thompson claims he found the reason why the 1940s cooling started with such a drastic reversal of the previous heating trend, something that had apparently puzzled scientists for a long time. The reason given for this flip in temperature was the changeover in methods of collecting ocean temperature data: that is, the change from dipping sampling buckets to reading engine cooling water inlet temperatures.
Let us look closer at this cooling period.
Before and after WWII the British fleet used the ‘bucket method’ to gather ocean water for measuring its temperature. That is, a bucket full of water was sampled with a bulb thermometer. The other prevalent probing method was reading ships’ cooling water inlet temperatures.
These two data gathering methods are explained in a 2008 letter to Nature, where Thompson et al coined the following wording (SST = sea surface temperature):
“The most notable change in the SST archive following December 1941 occurred in August 1945. Between January 1942 and August 1945 ~80% of the observations are from ships of US origin and ~5% are from ships of UK origin; between late 1945 and 1949 only ~30% of the observations are of US origin and about 50% are of UK origin. The change in country of origin in August 1945 is important for two reasons: first, in August 1945 US ships relied mainly on engine room intake measurements whereas UK ships used primarily uninsulated bucket measurements, and second, engine room intake measurements are generally biased warm relative to uninsulated bucket measurements.”
Climate watchers had detected a bump in the delta between water temperatures and night marine air temperatures (NMAT). They therefore thought they had found the culprit for the unexplained 0.3 degree C flip in temperatures around 1945, which invited them to tweak and modify the records. The applied bias corrections, according to Thompson, “might increase the century-long trends by raising recent SSTs as much as ~0.1 deg. C”.
This bias correction was reportedly featured prominently in the IPCC 2007 summary for policy makers. The questions to ask: how was Thompson’s 0.1 degree correction applied – uniformly? Over what time period? Is the 0.1 degree number a guess, arbitrarily picked? Does it account for any of the measurement errors discussed further in this paper?
A fundamental question arises: how authentic is our temperature record? How confidently can we establish trend lines and future scenarios? It is obvious that we need to know the numbers to within 0.1 degree C accuracy to allow sensible interpretation of the records and to believably project future global temperatures.
We shall examine the methods of measuring ocean water temperatures in detail, and we will discover that the data record is very coarse, often by much more than an order of magnitude!
First, it is prudent to present the fundamentals, in order to establish a common understanding.
2. The basics
We can probably agree that, for extracting definite trends from global temperature data, the resolution of this data should be at least +/-0.1 degree C. An old rule of thumb for any measurement calls for the instrument to have better than three times the accuracy of the targeted resolution. In our case that is +/-0.03 degree C.
Thermal instrument characteristics define error as a percentage of full range. Read-out resolution is another error source, as is read-out error. Example: assume a thermometer has a range of 200 degrees; 2% accuracy is then 4 degrees, and the gradation (resolution) is 2 degrees. Read-out accuracy may be 0.5 degrees, discerned between gradations if you ‘squint properly’.
Now let us turn to the instruments themselves.
Temperature is directly measured via bulb and capillary liquid-based thermometers, bimetal dial gages and, with the help of electronics, thermistors, as well as a few specialized methods. From satellites, earth temperatures are interpreted indirectly from infrared radiation.
We are all familiar with the capillary and liquid filled thermometers we encounter daily. They and bimetal based ones have typical accuracy of +/-1 to 2 percent and are no more accurate than to 0.5 degree C. The thermistors can be highly accurate upon precise calibration.
Temperature data collection out in nature involves not just the thermal reading device by itself. It is always a system, comprising to various extents the data collection method, temporary data storage, data conversion and transmission, manipulation, etc. Add to this variations of time, location and protocol, sensor drift and deterioration, variations in the measured medium, different observers and interpreters, etc. All of this contributes to the system error. The individual error components are separated into fixed (known) errors and random (variable) errors. Further, we must pay attention to significant figures when assessing error contributors, since false precision may creep in when calculating total error.
Compilation of errors:
All errors identified as random errors are combined using the square root of the sum of the squares method (RSS). Systematic errors are combined by algebraic summation. The total inaccuracy is the algebraic sum of the total random error and total systematic error.
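As a sketch only, this combination rule can be put into a few lines of Python; the function name and the sample component values are illustrative, not taken from any cited measurement:

```python
import math

def total_error(systematic, random):
    # Systematic (fixed) errors combine by algebraic summation;
    # random errors combine by root-sum-of-squares (RSS).
    rss_random = math.sqrt(sum(e ** 2 for e in random))
    return sum(systematic) + rss_random

# Hypothetical components, degrees C:
# one fixed instrument error of 1.0, two random errors of 0.3 and 0.4
print(total_error(systematic=[1.0], random=[0.3, 0.4]))  # 1.5
```

Note that the RSS term (0.5 here) is smaller than the algebraic sum of the random components (0.7), reflecting the assumption that independent random errors partially cancel.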
3. Predominant ocean temperature measuring methods
a. Satellite observations
Infrared sensors aboard NOAA satellites have been measuring water surface temperatures with high-resolution radiometers.
Measurements are indirect and must account for uncertainties in other parameters, which can be only poorly estimated. Only the top-most water layer can be measured, which can induce a strong diurnal error. Satellite measurements came into use around the 1970s, so they have no correlation to earlier thermal profiling. Since they are indirect readings of ocean temperature, with many layers of correction and interpretation and unique error aspects, they are not investigated here; such an effort would warrant a whole other detailed assessment.
b. Direct water temperature measurements – historical
Dipping of a thermometer in a bucket of a collected water sample was used prior to the need for understanding the thermal gradients down in deeper portions of the ocean. The latter became vital for submarine warfare, before and during WWII. Earlier, water temperature knowledge was needed only for meteorological forecasting.
The ‘Bucket Method’:
Typically a wooden bucket was thrown from a moving ship, hauled aboard, and a bulb thermometer was dipped in the bucket to get a reading. After the earlier wooden ones, canvas buckets became prevalent.
With greater attention to accuracy, insulated canvas buckets were introduced. In addition to the typical thermometer accuracy of 0.5 to 1.0 degree C, the highly variable random error sources were:
- depth of bucket immersion
- surface layer mixing from variable wave action and solar warming
- churning of surface layer from ship’s wake
- variable differential between water and air temperature
- relative direction between wind and ship’s movement
- magnitude of combined wind speed and ship’s speed
- time span between water collection and actual thermal readings
- time allowed for the thermometer to take on the water temperature
- degree of thermometer stirring (or not)
- reading at the center vs. edge of the bucket
- thermometer wet bulb cooling during reading of temperature
- optical acuity and attitude of operator
- effectiveness of bucket insulation and evaporative cooling effects of leakage
- thermal exchange between deck surface and bucket
An infrared image sequence (not reproduced here) shows how water in a canvas bucket cools. The figures are from this paper:
ABSTRACT: Uncertainty in the bias adjustments applied to historical sea-surface temperature (SST) measurements made using buckets are thought to make the largest contribution to uncertainty in global surface temperature trends. Measurements of the change in temperature of water samples in wooden and canvas buckets are compared with the predictions of models that have been used to estimate bias adjustments applied in widely used gridded analyses of SST. The results show that the models are broadly able to predict the dependence of the temperature change of the water over time on the thermal forcing and the bucket characteristics: volume and geometry; structure and material. Both the models and the observations indicate that the most important environmental parameter driving temperature biases in historical bucket measurements is the difference between the water and wet-bulb temperatures. However, assumptions inherent in the derivation of the models are likely to affect their applicability. We observed that the water sample needed to be vigorously stirred to agree with results from the model, which assumes well-mixed conditions. There were inconsistencies between the model results and previous measurements made in a wind tunnel in 1951. The model assumes non-turbulent incident flow and consequently predicts an approximately square-root dependence on airflow speed. The wind tunnel measurements, taken over a wide range of airflows, showed a much stronger dependence. In the presence of turbulence the heat transfer will increase with the turbulent intensity; for measurements made on ships the incident airflow is likely to be turbulent and the intensity of the turbulence is always unknown. Taken together, uncertainties due to the effects of turbulence and the assumption of well-mixed water samples are expected to be substantial and may represent the limiting factor for the direct application of these models to adjust historical SST observations.
Engine Inlet Temperature Readings:
A ship’s diesel engine uses a separate cooling loop to isolate the metal from the corrosive and fouling effects of sea water. The raw water inlet temperature is almost always measured via a gage (typically 1 degree C accuracy and never recalibrated) installed between the inlet pump and the heat exchanger. There was no operational reason to install thermometers directly at the hull, away from the data-skewing engine room heat-up – a location which would otherwise have been more useful for research purposes.
Random measurement influences to be considered:
- depth of water inlet
- degree of surface layer mixing from wind and sun
- wake churning of ship, also depending on speed
- differential between water and engine room temperature
- variable engine room temperature
Further, there are long-term time variable system errors to be considered:
- added thermal energy input from the suction pump
- piping insulation degradation and internal fouling.
c. Deployed XBT (Expendable Bathythermograph) Sondes
Sonar sounding measurements and the need for submarines to hide from detection in thermal inversion layers fueled the initial development and deployment of the XBT probes. They are launched from moving surface ships and submarines. Temperature data are sent aboard the ship via a thin copper wire unspooling from the probe, which descends at a slightly decreasing speed. Temperature is read by a thermistor; processing and recording of data are done with shipboard instruments. Depth is calculated from elapsed time using a manufacturer-specific fall-rate formula. Thermistor readings are recorded via voltage changes of the signal.
Random errors introduced:
- thermal lag effect of probe shipboard storage vs. initial immersion temperature
- storage time induced calibration drift
- dynamic stress plus changing temperature effect on copper wire resistance during descent
- variable thermal lag effect on thermistor during descent
- variability of surface temperature vs. stability of deeper water layers
- instrument operator induced variability.
d. ARGO Floating Buoys
The concept of these submersed buoys is a clever approach to have nearly 4000 such devices worldwide autonomously recording water data, while drifting at different ocean depths. Periodically they surface and batch transmit the stored time, depth (and salinity) and temperature history via satellite to ground stations for interpretation. The prime purpose of ARGO is to establish a near-simultaneity of the fleet’s temperature readings down to 2500 meters in order to paint a comprehensive picture of the heat content of the oceans.
Surprisingly, the only accuracy data given by ARGO manufacturers is the high precision of 0.001 degree C of their calibrated thermistor. Queries with the ARGO program office revealed that they have no knowledge of a system error, an awareness one should certainly expect from this sophisticated scientific establishment. An inquiry about various errors with the predominant ARGO probe manufacturer remains unanswered.
In fact, the following random errors must be evaluated:
- valid time simultaneity in depth and temperature readings, since the floats move with currents and may transition into different circulation patterns, i.e. different vertical and lateral ones
- thermal lag effects
- invalid readings near the surface are eliminated; the depth extent of this error is unknown
- error in batch transmissions from the satellite antenna due to wave action
- error in two-way transmissions, applicable to later model floats
- wave and spray interference with high frequency 20/30 GHz IRIDIUM satellite data transmission
- operator and recordation shortcomings and misinterpretation of data
Further, there are unknown resolution, processor and recording instrument system errors, such as coarseness of the analog/digital conversion, any effects of aging float battery output, etc. Also, there are subtle differences between float designs, which are probably hard to gage without detailed research.
e. Moored Ocean Buoys
Long term moored buoys have shortcomings in vertical range, which is limited by their need to be anchored, as well as in timeliness of readings and recordings. Further, they are subject to fouling and disrupted sensor function from marine biological flora. Their distribution is tied to near shore anchorage. This makes them of limited value for general ocean temperature surveying.
f. Conductivity, Temperature, Depth (CTD) Sondes
These sondes present the most accurate method of water temperature readings at various depths. They typically measure conductivity, temperature (also often salinity) and depth together. They are deployed from stationary research ships, which are equipped with a boom crane to lower and retrieve the instrumented sondes. Precise thermistor thermal data at known depths are transmitted in real-time via cable to onboard recorders. The sonde measurements are often used to calibrate other types of probes, such as XBT and ARGO, but the operational cost and sophistication as a research tool keeps them from being used on a wide basis.
4. Significant Error Contributions in Various Ocean Thermal Measurements
Here we attempt to assign typical operational errors to the above listed measuring methods.
a. The Bucket Method
Folland et al., in the paper ‘Corrections of instrumental biases in historical sea surface temperature data’, Q.J.R. Meteorol. Soc. 121 (1995), attempted to quantify the bucket method bias. The paper elaborates on the historic temperature records and variations in bucket types. Further, it is significant that no marine entity ever followed any kind of protocol in collecting such data. The authors undertake a very detailed heat transfer analysis and compare the results to some actual wind tunnel tests (uninsulated buckets cooling by 0.41 to 0.46 degree C). The data crunching included many global variables, as well as some engine inlet temperature readings. Folland’s corrections are +0.58 to 0.67 degree C for uninsulated buckets and +0.1 to 0.15 degrees for wooden ones.
Further, Folland et al (1995) state: “The resulting globally and seasonally averaged sea surface temperature corrections increase from 0.11 deg. C in 1856 to 0.42 deg. C by 1940.”
It is unclear why the 19th century corrections would be substantially smaller than those in the 20th century, though that could follow from the earlier predominance of wooden buckets (which are effectively insulating). It is also puzzling how these numbers correlate to Thompson’s generalizing statement that recent SSTs should be corrected for bias error “by as much as ~0.1 deg. C”. What about including the 0.42 degree number?
In considering a system error – see 3b. – the variable factors of predominant magnitude are diurnal, seasonal, sunshine and air cooling, spread of water vs. air temperature, plus a fixed error of thermometer accuracy of +/- 0.5 degree C, at best. Significantly, bucket filling happens no deeper than 0.5 m below water surface, hence, this water layer varies greatly in diurnal temperature.
Tabata (1978) says about measurements on a Canadian research ship: “bucket SSTs were found to be biased about 0.1°C warm, engine-intake SST was an order of magnitude more scattered than the other methods and biased 0.3°C warm”. So here both methods measured a warm bias, i.e. the correction factors would need to be negative, even for the bucket method – the opposite of the Folland numbers.
Where to begin in assigning values to the random factors? It seems near impossible to simulate a valid averaging scenario. For illustration’s sake, let us make an error calculation for a specific temperature of the water surface measured with an uninsulated bucket.
Air cooling 0.50 degrees (random)
Deckside transfer 0.05 degrees (random)
Thermometer accuracy 1.0 degrees (fixed)
Read-out and parallax 0.2 degrees (random)
Error e = 1.0 + (0.5² + 0.05² + 0.2²)^(1/2) = 1.54 degrees, or 51 times the desired accuracy of 0.03 degrees (see also section 2).
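The arithmetic can be checked with a short Python sketch; the component values are the illustrative ones from the list above, not measured data:

```python
import math

# Illustrative bucket-method error budget, degrees C (values from the text above)
fixed_thermometer = 1.0            # thermometer accuracy (fixed)
random_parts = [0.50, 0.05, 0.20]  # air cooling, deckside transfer, read-out/parallax

total = fixed_thermometer + math.sqrt(sum(e ** 2 for e in random_parts))
print(round(total, 2))      # 1.54
print(round(total / 0.03))  # 51 times the desired 0.03 degree accuracy
```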
b. Engine intake measurements
Saur 1963 concludes: “The average bias of reported sea water temperatures as compared to sea surface temperatures, with 95% confidence limits, is estimated to be 1.2 +/-0.6 deg F (0.67 +/-0.3 deg C) on the basis of a sample of 12 ships. The standard deviation is estimated to be 1.6 deg F (0.9 deg C)… The ship bias (average bias of injection temperatures from a given ship) ranges from -0.5 deg F to 3.0 deg F (-0.3 deg C to 1.7 deg C) among 12 ships.”
Errors in engine-intake SST depend strongly on the operating conditions in the engine room (Tauber 1969).
James and Fox (1972) show that intakes at 7m depth or less showed a bias of 0.2 degree C and deeper inlets had biases of 0.6 degree C.
Walden, 1966, summarizes records from many ships as reading 0.3 degree C too warm. However, it is doubtful that accurate instrumentation was used to calibrate readings. Therefore, his 0.3 degree number is probably derived crudely with shipboard standard ½ or 1 degree accuracy thermometers.
Let us make an error calculation re. a specific water temperature at the hull intake.
Thermometer accuracy 1.0 degree (fixed)
Ambient engine room delta 0.5 degree (random)
Pump energy input 0.1 degree (fixed)
Total error 1.6 degree C or 53 times the desired accuracy of 0.03 degrees.
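As with the bucket example, this sum can be verified with a brief Python sketch using the illustrative values above:

```python
import math

# Illustrative engine-intake error budget, degrees C (values from the text above)
fixed_parts  = [1.0, 0.1]  # thermometer accuracy, pump energy input (fixed)
random_parts = [0.5]       # ambient engine room delta (random)

total = sum(fixed_parts) + math.sqrt(sum(e ** 2 for e in random_parts))
print(round(total, 2))   # 1.6
print(int(total / 0.03)) # 53 times the desired 0.03 degree accuracy
```

With only one random component, the RSS term reduces to that component itself, so the total is a plain sum here.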
One condition of engine intake measurements that differs strongly from the bucket readings is the depth below water surface at which measurements are collected, i.e. several meters down from the surface. This method provides cooler and much more stable temperatures as compared to the buckets nipping water and skipping along the top surface. However, engine room temperature appears highly influential. Many records are only given in full degrees, which is the typical thermometer resolution.
But, again the only given fixed error is the thermometer accuracy of one degree, as well as the pump energy heating delta. The variable errors may be of significant magnitude on larger ships and they are hard to generalize.
All these wide variations make the engine intake record nearly impossible to validate. Further, the record was collected at a time when research-grade calibration was hardly ever practiced.
c. XBT Sondes
Lockheed-Martin (Sippican) produces several versions, and they advertise a temperature accuracy of +/-0.1 degree C and a system accuracy of +/-0.2 degree C. Aiken (1998) determined an error of +0.5 degree C, and in 2007 Gouretski found an average bias of +0.2 to 0.4 degree C. The research community also implies accuracy variations with different data acquisition and recording devices and recommends calibration against parallel CTD sondes. There is significant variation in the depth–temperature correlation, on which researchers still hold workshops to shed light.
We can only make coarse assumptions for the total error. Given the manufacturer’s listed system error of 0.2 degrees and the range of errors as referenced above, we can pick a total error of, say, 0.4 degree C, which is 13 times the desired error of 0.03 degrees.
d. ARGO Floats
The primary US based float manufacturer APEX Teledyne boasts a +/-0.001 degree C accuracy, which is calibrated in the laboratory (degraded to 0.002 degrees with drift). They have confirmed this number after retrieving a few probes after several years of duty. However, there is no system accuracy given, and the manufacturer stays mum to inquiries. The author’s communication with the ARGO program office reveals that they don’t know the system accuracy. This means they know the thermistor accuracy and nothing beyond. The Seabird Scientific SBE temperature/salinity sensor suite is used within nearly all of the probes, but the company, upon email queries, does not reveal its error contribution. Nothing is known about the satellite link errors or the on-shore processing accuracies.
Hadfield (2007) reports CTD measurements from a research ship transect at 36 degrees North latitude, compared to ARGO float temperature data on both sides of the transect, registered within 30 days and sometimes beyond. Generally the data agree within 0.6 degree C RMS, with a 0.4 degree C differential to the East and 2.0 degrees to the West. These numbers do not reflect the accuracy of the floats themselves, but rather the limited usefulness of the ARGO data, i.e. the time simultaneity and geographical location of overall ocean temperature readings. This uncertainty indicates significant limits for determining ocean heat content, which is the raison d’etre for ARGO.
The ARGO quality manual for CTD and trajectory data does spell out the flagging of probably bad data: the temperature drift between two average values has a fail criterion of max. 0.3 degrees, with a mean of 0.02 and a minimum of 0.001 degrees.
Assigning an error budget to ARGO float readings certainly yields a much higher value than the manufacturer’s stated 0.002 degrees.
So, let’s try:
Allocated system error 0.01 degree (assume fixed)
Data logger error and granularity 0.01 (fixed)
Batch data transmission 0.05 (random)
Ground based data granularity 0.05 (fixed)
Total error 0.12 degrees, which is four times the desired accuracy of 0.03 degrees.
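The same combination rule, applied to these assumed ARGO components (the allocations are the author's guesses, as stated above), gives:

```python
import math

# Assumed ARGO float error budget, degrees C (allocations from the text above)
fixed_parts  = [0.01, 0.01, 0.05]  # system error, data logger, ground-based granularity
random_parts = [0.05]              # batch data transmission

total = sum(fixed_parts) + math.sqrt(sum(e ** 2 for e in random_parts))
print(round(total, 2))      # 0.12
print(round(total / 0.03))  # 4 times the desired 0.03 degree accuracy
```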
However, the Hadfield correlation to CTD readings from a ship transect varies up to 2 degrees, which is attributable to time and geographic spread and some amount of ocean current flow and seasonal thermal change, in addition to the inherent float errors.
The NASA Willis 2003/5 ocean cooling episode:
This matter is further discussed in the summary section 5.
In 2006 Willis published a paper, which showed a brief ocean cooling period, based on ARGO float profile records.
“Researchers found the average temperature of the upper ocean increased by 0.09 degrees Celsius (0.16 degrees F) from 1993 to 2003, and then fell 0.03 degrees Celsius (0.055 degrees F) from 2003 to 2005. The recent decrease is a dip equal to about one-fifth of the heat gained by the ocean between 1955 and 2003.”
The 2007 paper correction tackles this apparent global cooling trend during 2003/4 by removing specific wrongly programmed (pressure related) ARGO floats and by correlating earlier warm biased XBT data, which, together, was said to substantially minimize the magnitude of this cooling event.
Willis’ primary focus was gaging the ocean heat content and the influences on sea level. The following are excerpts from his paper, with and without the correction text.
“The average uncertainty is about 0.01 °C at a given depth. The cooling signal is distributed over the water column with most depths experiencing some cooling. A small amount of cooling is observed at the surface, although much less than the cooling at depth. This result of surface cooling from 2003 to 2005 is consistent with global SST products [e.g. https://ift.tt/2Jekujg%5D. The maximum cooling occurs at about 400 m and substantial cooling is still observed at 750 m. This pattern reflects the complicated superposition of regional warming and cooling patterns with different depth dependence, as well as the influence of ocean circulation changes and the associated heave of the thermocline.
The cooling signal is still strong at 750 m and appears to extend deeper (Figure 4). Indeed, preliminary estimates of 0 – 1400 m OHCA based on Argo data (not shown) show that additional cooling occurred between depths of 750 m and 1400 m.”
“…. a flaw that caused temperature and salinity values to be associated with incorrect pressure values. The size of the pressure offset was dependent on float type, varied from profile to profile, and ranged from 2–5 db near the surface to 10–50 db at depths below about 400 db. Almost all of the WHOI FSI floats (287 instruments) and approximately half of the WHOI SBE floats (about 188 instruments) suffered from errors of this nature. The bulk of these floats were deployed in the Atlantic Ocean, where the spurious cooling was found. The cold bias is greater than −0.5°C between 400 and 700 m in the average over the affected data.
The 2% error in depth presented here is in good agreement with their findings for the period. The reason for the apparent cooling in the estimate that combines both XBT and Argo data (Fig. 4, thick dashed line) is the increasing ratio of Argo observations to XBT observations between 2003 and 2006. This changing ratio causes the combined estimate to exhibit cooling as it moves away from the warm-biased XBT data and toward the more neutral Argo values.
Systematic pressure errors have been identified in real-time temperature and salinity profiles from a small number of Argo floats. These errors were caused by problems with processing of the Argo data, and corrected versions of many of the affected profiles have been supplied by the float provider.
Here errors in the fall-rate equations are proposed to be the primary cause of the XBT warm bias. For the study period, XBT probes are found to assign temperatures to depths that are about 2% too deep.”
Note that Willis and co-authors estimated the heat content of the upper 750 meters. This zone represents about 20 percent of the global ocean’s average depth.
Further, the Fig. 2 given in Willis’ paper shows the thermal depth profile of one of the eliminated ARGO floats, i.e. correct vs. erroneous.
Averaging the error at specific depths, we can glean a 0.15 degree differential ranging from about 350 to 1100 m. That amount differs from the cold bias of -0.5 degree stated above, albeit the latter is defined between 400 and 700 meters.
All of these explanations by Willis are very confusing and don’t inspire confidence in any and all ARGO thermal measurement accuracies.
5. Observations and Summary
5.1 Ocean Temperature Record before 1960/70
Trying to extract an ocean heating record and trend lines during the times of bucket and early engine inlet readings seems a futile undertaking because of vast systematic and random data error distortion.
The bucket water readings were done for meteorological purposes only and
a. without quality protocols and with possibly marginal personnel qualification
b. by many nations, navies and merchant marine vessels
c. on separate oceans and often contained within trade routes
d. significantly, scooping up only from a thin surface layer
e. with instrumentation that was far cruder than the desired quality
f. subject to wide physical sampling variations and environmental perturbances
Engine inlet temperature data were equally coarse, due to
a. lack of quality controls and logging by marginally qualified operators
b. thermometers unsuitable for the needed accuracy
c. subject to broad disturbances from within the engine room
d. variations in the intake depth
e. and again, often confined to specific oceans and traffic routes
The engine inlet temperatures differed significantly from the uppermost surface readings,
a. being from a different depth layer
b. being more stable than the diurnally influenced and solar heated top layer
Given the transition in primary ocean temperature measuring methods, it appears logical to find an upset of the temperature record around WWII. However, finding a specific correction factor for the data looks more like speculation, because the two historic collection methods are simply too disparate in their characteristics and in what they measure – the proverbial apples and oranges comparison. Still, the fact of a rapid 1945/6 cooling onset must be acknowledged, because the follow-on years remained cool. Then the warming curve started again abruptly in the mid-seventies, as the graph in section 1 shows so distinctly.
5.2 More recent XBT and ARGO measurements:
Even though the XBT record is of limited accuracy, as reflected in a manufacturer’s stated system accuracy of +/- 0.2 degrees, we must recognize its advantages and disadvantages, especially in view of the ARGO record.
a. they go back to the 1960’s, with an extensive record
b. ongoing calibration against CTD measurements is required because the depth-calculation formulae are not consistently accurate
c. XBT data are firm in simultaneous time and geographical origin, inviting statistical comparisons
d. recognized bias of depth readings, which are relatable to temperature
The ARGO float record, in turn:
a. useful quantity and distribution of float cycles exist only since around 2003
b. data accuracy cited as extremely high, but with unknown system accuracy
c. data collection can be referenced only to point in time of data transmission via satellite. Intermediary ocean current direction and thermal mixing during float cycles add to uncertainties.
d. calibration via CTD probing is hampered by geographical and time separation and has led to temperature discrepancies of three orders of magnitude, i.e. 2.0 degrees (Hadfield 2007) vs. 0.002 degrees (manufacturer’s statement)
e. programming errors have historically produced faulty depth correlations
f. there are unknown satellite data-transmission errors, possibly from attenuation of the 20/30 GHz signal by wave action, spray and hard rain.
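To illustrate the point about unknown system accuracy, independent error components such as those listed above combine in quadrature (root-sum-square), so the total is dominated by the largest single term. The component magnitudes below are invented placeholders for demonstration, not figures from the ARGO program:

```python
import math

# Hypothetical, illustrative error budget for a single float temperature
# reading. Every magnitude below is an assumption for demonstration only.
error_components_degC = {
    "sensor calibration": 0.002,    # a manufacturer-style stated accuracy
    "depth/pressure offset": 0.05,  # temperature error from a depth error
    "drift between calibrations": 0.01,
    "transmission/encoding": 0.005,
}

# Independent random errors add in quadrature; note how one poorly known
# component (here the depth offset) swamps the precise sensor spec.
total = math.sqrt(sum(e**2 for e in error_components_degC.values()))
print(f"combined system error: +/- {total:.3f} deg C")
# -> combined system error: +/- 0.051 deg C
```

The point of the sketch: quoting the 0.002 degree sensor figure as "system accuracy" is meaningless while the larger components remain uncharacterized.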
Despite the original intent of creating a precise map of ocean heat content, the ARGO float data must be recognized within their limitations. The ARGO community does not know the floats’ temperature system error.
The research by NASA’s Willis into the 2003/4 ocean cooling leaves open many questions:
a. is it feasible to broaden the validity of certain ARGO cooling readings from a portion of the Atlantic to the database of the entire global ocean?
b. is it credible to conclude that a correction to a small percentage of erroneous depth offset temperature profiles across a narrow vertical range has sufficient thermal impact on the mass of all the oceans, such as to claim global cooling or not?
c. Willis argues that the advent of precise ARGO readings around 2003, versus the earlier record of warm-biased XBTs, triggered the apparent onset of ocean cooling. However, this XBT warm bias was well known at the time. Could he not have accounted for it when comparing the records from before and after ARGO’s rise to prominence?
d. Why does the cold bias of -0.5 degrees C stated by Willis differ so significantly from the 0.15 degree differential that can be seen in his graph?
The Willis et al. research appears to be an example of inappropriately mixing two generically different types of data (XBT and ARGO), and it concludes, surprisingly, that
a. the pre-2003 record should be discounted for its apparent cooling upset, and
b. this same pre-2003 record should still be valid for determining the trend line of ocean heat content.
Willis states ”… surface cooling from 2003 to 2005 is consistent with global SST products” (SST = sea surface temperature). This statement contradicts his later conclusion that the cool-down was invalidated by the findings of his 2007 corrections.
Further, Willis’s manipulation of float metadata together with XBT data is confusing enough to make one question the conclusions of his 2007 correction paper.
- It appears that the historic ‘bucket’ and ‘engine inlet’ temperature record should be regarded as anecdotal, historical and overly coarse, rather than serving the scientific search for ocean warming trends. With the advent of XBT and ARGO probes the trend lines read more accurately, but they are often obscured by instrument system error.
- Any attempt at establishing the ocean temperature trend must consider the magnitude of random errors vs. known system errors. If one tries to fit a trend line in a field of high random error, the average value may lie at the extreme of the error band, not at its center.
- Ocean temperature should be discernible to around 0.03 degrees C precision. However, system errors often exceed this precision by up to three orders of magnitude!
- If the cited research by NASA’s Willis is symptomatic of the ocean science community in general, the supposed scientific due diligence in conducting analysis is not to be trusted.
- With the intense focus on statistical manipulation of metadata, and on merely what is shown on the computer display, many scientists appear to forgo due prudence in factoring in the origin of the data, its accuracy and its relevance. Often there is simply not enough background about the fidelity of the raw data to make realistic error estimates, nor to draw any conclusion as to whether the data are appropriate for the purpose they are being used for.
- Further, statistical averaging presupposes averaging the same thing. Ocean temperature averaging over time cannot combine averages from records of widely varying data origin, instruments and methods. Their individual pedigrees must be known and accounted for!
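The danger of averaging across instruments of different pedigree can be shown with a toy simulation (my own illustration, not from the source): a constant true sea temperature sampled by two instrument types with different systematic biases produces a spurious step in the averaged record whenever the instrument mix changes, much like the 1945 blip discussed above. The bias values and mix fractions are assumptions:

```python
import random

random.seed(1)

# Toy model: the true SST never changes, but two instrument types carry
# different systematic biases (values below are invented for illustration).
TRUE_SST = 15.0
BIAS = {"bucket": 0.0, "inlet": 0.3}  # warm-biased engine-inlet readings

def yearly_mean(fraction_inlet, n=1000, noise=0.5):
    """Average n noisy readings whose instrument mix is fraction_inlet."""
    total = 0.0
    for _ in range(n):
        kind = "inlet" if random.random() < fraction_inlet else "bucket"
        total += TRUE_SST + BIAS[kind] + random.gauss(0.0, noise)
    return total / n

# As the instrument mix shifts, the 'measured' record shows a step of
# roughly 0.3 deg C even though the true temperature is constant.
for year, frac in [(1940, 0.8), (1945, 0.1), (1950, 0.1), (1975, 0.8)]:
    print(year, round(yearly_mean(frac), 2))
```

Running this prints a cooler mid-century plateau followed by an abrupt recovery, purely as an artifact of the changing mix, which is exactly why the individual pedigree of each record matters.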
Lastly, a proposal:
Thompson, when identifying the WWII-era temperature blip between the bucket and engine inlet records, correlated them to NMAT, i.e. the night marine air temperatures, for each of the two collection methods. NMATs may offer a way – probably to a limited degree – to validate and tie together the disparate record all the way from the 19th century to future satellite surveys.
This could be accomplished by research-ship cruises into cold, temperate and hot ocean zones, measuring representative bucket and engine inlet temperatures simultaneously, i.e. at the same time and location, while deploying a handful of XBT and ARGO sondes and running CTD probes at the same time. All these processes would be correlated under similar night-time conditions and with localized satellite measurements. With this cross-calibration one could trace back through the historical records and determine the delta temperatures to their NMATs wherever such a match can be found. NMATs could thus be made the Rosetta Stone for validating old, recent and future ocean recordings, so that past and future trends can be more accurately assessed. Surely funding this project should be easy, since it would correlate and cement the validity of much of the ocean temperature record, past and future.
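The proposed cross-calibration can be sketched in a few lines. The per-method offsets below are invented placeholders; in the proposal they would be measured on the calibration cruises as (method reading) minus (co-located NMAT), and every historical reading would then be restated on one common reference scale:

```python
# Hypothetical sketch of the proposed NMAT cross-calibration. The offset
# values are assumptions for illustration; a real cruise would measure them.
# Offset = (method reading) - (co-located NMAT), per method.
calibration_offsets_degC = {
    "bucket":       -0.10,  # e.g. buckets cool by evaporation before reading
    "engine_inlet": +0.25,  # e.g. inlet water warmed in the engine room
    "xbt":          +0.05,
    "argo":          0.00,  # taken here as the modern reference
}

def to_common_reference(reading_degC, method):
    """Re-express a historical reading on the ARGO-referenced NMAT scale."""
    delta = calibration_offsets_degC[method] - calibration_offsets_degC["argo"]
    return reading_degC - delta

# A hypothetical 1950 engine-inlet reading of 15.4 deg C would be restated as:
print(round(to_common_reference(15.4, "engine_inlet"), 2))  # -> 15.15
```

The design choice here is the one the proposal implies: NMAT serves only as the common yardstick, so any method whose offset to NMAT has been measured can be chained onto the same scale, from 19th-century buckets to future satellite retrievals.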
via Watts Up With That? https://ift.tt/1Viafi3