Interpolation Interpretation – Done With That?

Guest contribution by Willis Eschenbach

In the comments section of a post on a completely different topic, a debate broke out about interpolating over areas where you have no data. Let me give you a few examples, with names left out.

Kriging is nothing more than a spatially weighted averaging process. Interpolated data therefore show less variance than the observations.
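(That variance claim is easy to check numerically. Here is a minimal sketch using inverse-distance weighting — a simpler cousin of kriging, not kriging proper — with made-up station data; all numbers are illustrative, not from any real dataset.)

```python
import random

# Sketch only: interpolation as a spatially weighted average, here
# inverse-distance weighting (IDW). Station positions and temperatures
# below are invented for illustration.

def idw(x, stations, power=2):
    """Estimate the value at x as an inverse-distance-weighted
    average of the observed station values."""
    weights = [1.0 / (abs(x - sx) ** power + 1e-9) for sx, _ in stations]
    total = sum(weights)
    return sum(w * v for w, (_, v) in zip(weights, stations)) / total

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

random.seed(0)
# Ten fictitious stations along a line, ~15 C plus random wiggles.
stations = [(sx, 15.0 + random.uniform(-3, 3)) for sx in range(0, 100, 10)]

# Interpolate on a dense grid between the stations.
estimates = [idw(x, stations) for x in range(100)]

obs_var = variance([v for _, v in stations])
est_var = variance(estimates)
print(round(obs_var, 3), round(est_var, 3))
```

Because every interpolated value is a weighted average of the observations, the interpolated field is smoother, and its variance comes out below the variance of the station values themselves — which is exactly the commenter's point.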

The idea that interpolation could be better than observation is absurd. You only know things that you measure.

I am not saying that interpolation is better than observation. I am saying that interpolation with a locality-based approach is better than one that uses a global approach. Do you have a different opinion?

I disagree. In general, interpolation in the context of global temperature doesn’t make things any better. For surface datasets I have always preferred HadCRUT4 to the others because it is not interpolated.

After interpolation, you are analyzing a mixture of data and model, not data. What you analyze then takes on as many properties of the model as of the data. Bad.

How do you estimate the value of empty grid cells without doing some kind of interpolation?

You DON’T! You tell people what you *know*. You don’t make up what you don’t know and try to pass it off as the truth.

If you only know the temperature for 85% of the earth, just say, “Our metric for 85% of the earth is such-and-such. For the other 15% we don’t have good data and can only estimate the metric value.”

If you don’t have the measurements, you can’t guess at the missing data. When you do that, you are making things up.

Hmmm … people who know me know that I prefer experiment to theory. So I figured I’d see whether I could fill in the missing data and get a better answer than leaving it blank. Here is my experiment. I start with the CERES estimate of the 2000–2020 average surface temperature.

Figure 1. Average surface temperature from CERES, 2000-2020

Note that the earth’s mean temperature is 15.2°C, the land 8.7°C, and the ocean 17.7°C. Also note that the Andes, on the left side of upper South America, are much cooler than the rest of the continent.

Next I punch out some of the data. Figure 2 shows this result.

Figure 2. Average CERES surface temperature with data removed, 2000-2020

Note that with the data missing, the global mean temperature is now cooler: 14.6°C versus 15.2°C for the full data, a significant error of about 0.6°C. The land and sea temperatures are also too low, by 1.3°C and 0.4°C respectively.

Next, I use mathematical analysis to fill in the hole. Here is the result:

Figure 3. Average CERES surface temperature with patched data, 2000-2020

Notice that the errors in land temperature, sea temperature, and global temperature have all decreased. In particular, the land error has dropped from 1.3°C to 0.1°C. The estimate for the ocean is too warm in some areas, as can be seen in Figure 3. Even so, the global average ocean temperature is still better than simply leaving the data out (0.1°C error instead of 0.4°C).

My point here is simple. There are times when you can use knowledge of the overall parameters of the system to improve the situation when data are missing.
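The shape of the experiment is easy to reproduce in miniature. The sketch below is NOT the method used on the CERES data (which is deliberately not revealed here); it uses a small synthetic "temperature" grid, punches a hole in a warm region, and fills the hole with a crude stand-in — averaging the nearest observed cells in each column — just to show that a filled mean can beat a mean that simply ignores the hole.

```python
# Toy version of the punch-a-hole experiment. The grid, the mask, and
# the neighbour-average fill are all invented for illustration; this
# is not the fill method used in the post.

def make_field(n=20):
    # Warm toward the "equator" (middle rows), cool at the "poles".
    return [[10.0 + 15.0 * (1 - abs(i - n / 2) / (n / 2))
             for j in range(n)] for i in range(n)]

def mean(field, mask=None):
    vals = [field[i][j]
            for i in range(len(field))
            for j in range(len(field[i]))
            if mask is None or not mask[i][j]]
    return sum(vals) / len(vals)

n = 20
field = make_field(n)
true_mean = mean(field)

# Punch out a warm block, like removing a chunk of the tropics.
mask = [[8 <= i < 12 and 5 <= j < 15 for j in range(n)] for i in range(n)]

naive_mean = mean(field, mask)  # ignore the hole: biased cool

# Fill each missing cell with the average of the nearest observed
# cells above and below it in the same column.
filled = [row[:] for row in field]
for i in range(n):
    for j in range(n):
        if mask[i][j]:
            above = next(field[k][j] for k in range(i, -1, -1)
                         if not mask[k][j])
            below = next(field[k][j] for k in range(i, n)
                         if not mask[k][j])
            filled[i][j] = (above + below) / 2

filled_mean = mean(filled)
print(round(true_mean - naive_mean, 3))   # error from leaving the hole
print(round(true_mean - filled_mean, 3))  # error after filling
```

Because the removed block is warmer than the global average, simply dropping it biases the mean cool, while even this crude fill pulls the estimate back toward the truth — the same pattern as Figures 2 and 3.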

And how did I create the patch to fill in the missing data?

Well … I think I’ll leave that unspecified for now, to be revealed later. Though I’m sure the WUWT readers will figure it out soon enough …

My best wishes to all,


PS: To avoid the misunderstandings that are the curse of the intarwebs, PLEASE quote the exact words you are talking about.

Like this:

