An introduction to weather forecasting with deep learning

With all that is going on in the world these days, is it frivolous to talk about weather prediction? Asked in the twenty-first century, this is bound to be a rhetorical question. In the 1930s, when German poet Bertolt Brecht wrote the famous lines:

Was sind das für Zeiten, wo

Ein Gespräch über Bäume fast ein Verbrechen ist

Weil es ein Schweigen über so viele Untaten einschließt!

(“What kind of times are these, where a conversation about trees is almost a crime, for it means silence about so many atrocities!”),

he could not have anticipated the responses he would get in the second half of that century, with trees symbolizing, as well as literally falling victim to, environmental pollution and climate change.

Today, no lengthy justification is needed for why prediction of atmospheric states is vital: Owing to global warming, the frequency and intensity of severe weather conditions – droughts, wildfires, hurricanes, heatwaves – have risen and will continue to rise. And while accurate forecasts don’t change those events per se, they constitute essential information in mitigating their consequences. This goes for atmospheric forecasts on all scales: from so-called “nowcasting” (operating on a range of about six hours), over medium-range (three to five days) and sub-seasonal (weekly/monthly), to climate forecasts (concerned with years and decades). Medium-range forecasts especially are extremely important in acute disaster prevention.

This post will show how deep learning (DL) methods can be used to generate atmospheric forecasts, using a recently published benchmark dataset (Rasp et al. 2020). Future posts may refine the model used here and/or discuss the role of DL (“AI”) in mitigating climate change – and its implications – more globally.

That said, let’s put the current endeavor in context. In a way, we have here the usual déjà vu of using DL as a black-box-like, magic tool on a task where human knowledge used to be required. Of course, this characterization is overly dichotomizing; many choices are made in creating DL models, and performance is necessarily constrained by the available algorithms – which may, or may not, fit the domain to be modeled to a sufficient degree.

If you started learning about image recognition rather recently, you may well have been using DL methods from the outset, and not have heard much about the rich set of feature engineering methods developed in pre-DL image recognition. In the context of atmospheric prediction, then, let’s begin by asking: How on earth did they do this before?

Numerical weather prediction in a nutshell

It is not as if machine learning and/or statistics were not already used in numerical weather prediction – on the contrary. For example, every model has to start from somewhere; but raw observations are not suited to direct use as initial conditions. Instead, they have to be assimilated to the four-dimensional grid over which model computations are performed. At the other end, namely model output, statistical post-processing is used to refine the predictions. And very importantly, ensemble forecasts are employed to determine uncertainty.

That said, the model core, the part that extrapolates into the future the atmospheric conditions observed today, is based on a set of differential equations, the so-called primitive equations, which follow from the conservation laws of momentum, energy, and mass. These differential equations cannot be solved analytically; rather, they have to be solved numerically, and on a grid of as high a resolution as possible. In that light, even deep learning might appear as merely “moderately resource-intensive” (depending, though, on the model in question). So how, then, might a DL approach look?
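For background only – nothing below depends on it – here is one common textbook form of the hydrostatic primitive equations in pressure coordinates, with V the horizontal wind, ω the vertical (pressure) velocity, Φ the geopotential, f the Coriolis parameter, T the temperature, R and c_p the gas constant and specific heat of dry air, and F, Q friction and diabatic heating terms:

\begin{aligned}
\frac{d\mathbf{V}}{dt} &= -\nabla_p \Phi - f\,\mathbf{k}\times\mathbf{V} + \mathbf{F} && \text{(horizontal momentum)} \\
\frac{\partial \Phi}{\partial p} &= -\frac{R T}{p} && \text{(hydrostatic balance)} \\
\nabla_p \cdot \mathbf{V} + \frac{\partial \omega}{\partial p} &= 0 && \text{(mass continuity)} \\
\frac{dT}{dt} &= \frac{R T \omega}{c_p\, p} + \frac{Q}{c_p} && \text{(thermodynamic energy)}
\end{aligned}

Operational models integrate (elaborations of) such systems forward in time, which is what makes them so computationally demanding.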

Deep learning models for weather prediction

Accompanying the benchmark dataset they created, Rasp et al. (Rasp et al. 2020) provide a set of notebooks, among them one demonstrating the use of a simple convolutional neural network to predict two of the available atmospheric variables, 500hPa geopotential and 850hPa temperature. Here 850hPa temperature is the (spatially varying) temperature at the fixed pressure level of 850hPa (corresponding to an altitude of roughly 1.5 km); 500hPa geopotential is proportional to the (again, spatially varying) altitude associated with the pressure level in question (500hPa).

For this task, two-dimensional convnets, as usually employed in image processing, are a natural fit: Image width and height map to longitude and latitude of the spatial grid, respectively; target variables appear as channels. In this architecture, the time-series character of the data is essentially lost: Every sample stands alone, without dependency on either past or present. In this respect, as well as given its size and simplicity, the convnet presented below is just a toy model, meant to introduce the approach as well as the application overall. It may also serve as a deep learning baseline, along with two other types of baseline commonly used in numerical weather prediction, introduced below.

Directions for improving on that baseline are given by recent publications. Weyn et al. (Weyn, Durran, and Caruana, n.d.), in addition to applying more geometrically adequate spatial preprocessing, use a U-Net-based architecture instead of a plain convnet. Rasp and Thuerey (Rasp and Thuerey 2020), building on a fully convolutional, high-capacity ResNet architecture, add a key new procedural ingredient: pre-training on climate models. With their method, they are able not just to compete with physical models, but also to provide evidence of the network learning about physical structure and dependencies. Unfortunately, compute facilities of this order are not available to the average individual, which is why we will content ourselves with demonstrating a simple toy model. Still, having seen a simple model in action, as well as the kind of data it works on, should help a lot in understanding how DL can be used for weather prediction.

Dataset

WeatherBench was explicitly created as a benchmark dataset and thus, as is common for this species, hides a lot of preprocessing and standardization effort from the user. Atmospheric data are available on an hourly basis, ranging from 1979 to 2018, at different spatial resolutions. Depending on resolution, there are about 15 to 20 measured variables, including temperature, geopotential, wind speed, and humidity. Of these variables, some are available at several pressure levels. Our example thus uses only a small subset of the available “channels.” To save storage, network and computational resources, it also operates at the smallest available resolution.

This post is accompanied by executable code on Google Colaboratory, which should not just render unnecessary any copy-pasting of code snippets but also allow for uncomplicated modification and experimentation.

To read in and extract the data, stored as NetCDF files, we use tidync, a high-level package built on top of ncdf4 and RNetCDF. Otherwise, availability of the usual “TensorFlow family” as well as a subset of tidyverse packages is assumed.
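Concretely, the code below assumes a setup along these lines (a best-guess, minimal list of packages inferred from the snippets that follow; adjust to your installation):

library(tidync)       # high-level NetCDF access (ncmeta and RNetCDF are called via :: below)

library(tensorflow)
library(keras)
library(tfdatasets)   # tensor_slices_dataset(), dataset_*(), zip_datasets()
library(tfautograph)  # autograph(), used in the training loop further down

library(tidyverse)    # dplyr, tidyr, purrr, ...
library(lubridate)    # isoweek(), for the climatology baseline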

As already alluded to, our example uses two spatio-temporal series: 500hPa geopotential and 850hPa temperature. The following commands will download and unpack the respective sets of per-year files, for a spatial resolution of 5.625 degrees:

download.file("https://dataserv.ub.tum.de/s/m1524895/obtain?path=%2F5.625degpercent2Ftemperature_850&information=temperature_850_5.625deg.zip",
              "temperature_850_5.625deg.zip")
unzip("temperature_850_5.625deg.zip", exdir = "temperature_850")

download.file("https://dataserv.ub.tum.de/s/m1524895/obtain?path=%2F5.625degpercent2Fgeopotential_500&information=geopotential_500_5.625deg.zip",
              "geopotential_500_5.625deg.zip")
unzip("geopotential_500_5.625deg.zip", exdir = "geopotential_500")

Inspecting one of those files’ contents, we see that its data array is structured along three dimensions: longitude (64 distinct values), latitude (32) and time (8760). The data itself is z, the geopotential.

tidync("geopotential_500/geopotential_500hPa_2015_5.625deg.nc") %>% hyper_array()
Class: tidync_data (record of tidync information arrays)
Variables (1): 'z'
Dimension (3): lon,lat,time (64, 32, 8760)
Supply: /[...]/geopotential_500/geopotential_500hPa_2015_5.625deg.nc

Extracting the data array is as easy as telling tidync to read the first in the list of arrays:

z500_2015 <- (tidync("geopotential_500/geopotential_500hPa_2015_5.625deg.nc") %>%
                hyper_array())[[1]]

dim(z500_2015)
[1] 64 32 8760

While we delegate further introduction to tidync to a comprehensive blog post on the rOpenSci website, let’s at least look at a quick visualization, for which we pick the very first time point. (Extraction and visualization code is analogous for 850hPa temperature.)

image(z500_2015[ , , 1],
      col = hcl.colors(20, "viridis"), # for temperature, the color scheme used is YlOrRd 
      xaxt = 'n',
      yaxt = 'n',
      main = "500hPa geopotential"
)

The maps show how pressure and temperature strongly depend on latitude. Furthermore, it is easy to spot the atmospheric waves:

Figure 1: Spatial distribution of 500hPa geopotential and 850hPa temperature for 2015/01/01 0:00h.

For training, validation and testing, we choose consecutive years: 2015, 2016, and 2017, respectively.

z500_train <- (tidync("geopotential_500/geopotential_500hPa_2015_5.625deg.nc") %>% hyper_array())[[1]]

t850_train <- (tidync("temperature_850/temperature_850hPa_2015_5.625deg.nc") %>% hyper_array())[[1]]

z500_valid <- (tidync("geopotential_500/geopotential_500hPa_2016_5.625deg.nc") %>% hyper_array())[[1]]

t850_valid <- (tidync("temperature_850/temperature_850hPa_2016_5.625deg.nc") %>% hyper_array())[[1]]

z500_test <- (tidync("geopotential_500/geopotential_500hPa_2017_5.625deg.nc") %>% hyper_array())[[1]]

t850_test <- (tidync("temperature_850/temperature_850hPa_2017_5.625deg.nc") %>% hyper_array())[[1]]

Since geopotential and temperature will be treated as channels, we concatenate the corresponding arrays. To transform the data into the format needed for images, a permutation is necessary:

train_all <- abind::abind(z500_train, t850_train, along = 4)  # stack the two variables as channels
train_all <- aperm(train_all, perm = c(3, 2, 1, 4))           # reorder to (time, lat, lon, channel)
dim(train_all)
[1] 8760 32 64 2

All data will be standardized using the mean and standard deviation obtained from the training set:

level_means <- apply(train_all, 4, mean)
level_sds <- apply(train_all, 4, sd)

round(level_means, 2)
54124.91  274.8

In words, the mean geopotential at the 500hPa isobaric surface corresponds to a geopotential height of roughly 5,500 metres, while the mean temperature at the 850hPa level approximates 275 Kelvin (about 2 degrees Celsius).
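As a quick plausibility check (an addition to the original walkthrough), dividing the mean geopotential by standard gravity recovers the geopotential height in metres, and subtracting 273.15 converts the mean temperature to Celsius:

g0 <- 9.80665               # standard gravity, in m/s^2
level_means[1] / g0         # ~ 5519: mean 500hPa geopotential height, in metres
level_means[2] - 273.15     # ~ 1.7: mean 850hPa temperature, in degrees Celsius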

train <- train_all
train[, , , 1] <- (train[, , , 1] - level_means[1]) / level_sds[1]
train[, , , 2] <- (train[, , , 2] - level_means[2]) / level_sds[2]

valid_all <- abind::abind(z500_valid, t850_valid, along = 4)
valid_all <- aperm(valid_all, perm = c(3, 2, 1, 4))

valid <- valid_all
valid[, , , 1] <- (valid[, , , 1] - level_means[1]) / level_sds[1]
valid[, , , 2] <- (valid[, , , 2] - level_means[2]) / level_sds[2]

test_all <- abind::abind(z500_test, t850_test, along = 4)
test_all <- aperm(test_all, perm = c(3, 2, 1, 4))

test <- test_all
test[, , , 1] <- (test[, , , 1] - level_means[1]) / level_sds[1]
test[, , , 2] <- (test[, , , 2] - level_means[2]) / level_sds[2]

We will attempt to predict three days ahead.

Now all that remains to be done is to construct the actual datasets.

lead_time <- 3 * 24  # data is hourly, so a three-day lead time corresponds to 72 time steps

batch_size <- 32

train_x <- train %>%
  tensor_slices_dataset() %>%
  dataset_take(dim(train)[1] - lead_time)

train_y <- train %>%
  tensor_slices_dataset() %>%
  dataset_skip(lead_time)

train_ds <- zip_datasets(train_x, train_y) %>%
  dataset_shuffle(buffer_size = dim(train)[1] - lead_time) %>%
  dataset_batch(batch_size = batch_size, drop_remainder = TRUE)

valid_x <- valid %>%
  tensor_slices_dataset() %>%
  dataset_take(dim(valid)[1] - lead_time)

valid_y <- valid %>%
  tensor_slices_dataset() %>%
  dataset_skip(lead_time)

valid_ds <- zip_datasets(valid_x, valid_y) %>%
  dataset_batch(batch_size = batch_size, drop_remainder = TRUE)

test_x <- test %>%
  tensor_slices_dataset() %>%
  dataset_take(dim(test)[1] - lead_time)

test_y <- test %>%
  tensor_slices_dataset() %>%
  dataset_skip(lead_time)

test_ds <- zip_datasets(test_x, test_y) %>%
  dataset_batch(batch_size = batch_size, drop_remainder = TRUE)

Let’s proceed to defining the model.

Basic CNN with periodic convolutions

The model is a straightforward convnet, with one exception: Instead of plain convolutions, it uses slightly more sophisticated ones that “wrap around” longitudinally.

periodic_padding_2d <- function(pad_width,
                                name = NULL) {
  
  keras_model_custom(name = name, function(self) {
    self$pad_width <- pad_width
    
    function (x, mask = NULL) {
      x <- if (self$pad_width == 0) {
        x
      } else {
        lon_dim <- dim(x)[3]
        pad_width <- tf$cast(self$pad_width, tf$int32)
        # wrap around for longitude
        tf$concat(list(x[, , -pad_width:lon_dim, ],
                       x,
                       x[, , 1:pad_width, ]),
                  axis = 2L) %>%
          tf$pad(list(
            list(0L, 0L),
            # zero-pad for latitude
            list(pad_width, pad_width),
            list(0L, 0L),
            list(0L, 0L)
          ))
      }
    }
  })
}

periodic_conv_2d <- function(filters,
                             kernel_size,
                             name = NULL) {
  
  keras_model_custom(name = name, function(self) {
    self$padding <- periodic_padding_2d(pad_width = (kernel_size - 1) / 2)
    self$conv <-
      layer_conv_2d(filters = filters,
                    kernel_size = kernel_size,
                    padding = 'valid')
    
    function (x, mask = NULL) {
      x %>% self$padding() %>% self$conv()
    }
  })
}

For our purpose of establishing a deep-learning baseline that is fast to train, CNN architecture and parameter defaults are chosen to be simple and moderate, respectively:

periodic_cnn <- function(filters = c(64, 64, 64, 64, 2),
                         kernel_size = c(5, 5, 5, 5, 5),
                         dropout = rep(0.2, 5),
                         name = NULL) {
  
  keras_model_custom(name = name, function(self) {
    
    self$conv1 <-
      periodic_conv_2d(filters = filters[1], kernel_size = kernel_size[1])
    self$act1 <- layer_activation_leaky_relu()
    self$drop1 <- layer_dropout(rate = dropout[1])
    self$conv2 <-
      periodic_conv_2d(filters = filters[2], kernel_size = kernel_size[2])
    self$act2 <- layer_activation_leaky_relu()
    self$drop2 <- layer_dropout(rate = dropout[2])
    self$conv3 <-
      periodic_conv_2d(filters = filters[3], kernel_size = kernel_size[3])
    self$act3 <- layer_activation_leaky_relu()
    self$drop3 <- layer_dropout(rate = dropout[3])
    self$conv4 <-
      periodic_conv_2d(filters = filters[4], kernel_size = kernel_size[4])
    self$act4 <- layer_activation_leaky_relu()
    self$drop4 <- layer_dropout(rate = dropout[4])
    self$conv5 <-
      periodic_conv_2d(filters = filters[5], kernel_size = kernel_size[5])
    
    function (x, mask = NULL) {
      x %>%
        self$conv1() %>%
        self$act1() %>%
        self$drop1() %>%
        self$conv2() %>%
        self$act2() %>%
        self$drop2() %>%
        self$conv3() %>%
        self$act3() %>%
        self$drop3() %>%
        self$conv4() %>%
        self$act4() %>%
        self$drop4() %>%
        self$conv5()
    }
  })
}

model <- periodic_cnn()

Training

In that same spirit of “default-ness,” we train with MSE loss and the Adam optimizer.

loss <- tf$keras$losses$MeanSquaredError(reduction = tf$keras$losses$Reduction$SUM)
optimizer <- optimizer_adam()

train_loss <- tf$keras$metrics$Mean(name = 'train_loss')

valid_loss <- tf$keras$metrics$Mean(name = 'valid_loss')

train_step <- function(train_batch) {

  with (tf$GradientTape() %as% tape, {
    predictions <- model(train_batch[[1]])
    l <- loss(train_batch[[2]], predictions)
  })

  gradients <- tape$gradient(l, model$trainable_variables)
  optimizer$apply_gradients(purrr::transpose(list(
    gradients, model$trainable_variables
  )))

  train_loss(l)

}

valid_step <- function(valid_batch) {
  predictions <- model(valid_batch[[1]])
  l <- loss(valid_batch[[2]], predictions)
  
  valid_loss(l)
}

training_loop <- tf_function(autograph(function(train_ds, valid_ds, epoch) {
  
  for (train_batch in train_ds) {
    train_step(train_batch)
  }
  
  for (valid_batch in valid_ds) {
    valid_step(valid_batch)
  }
  
  tf$print("MSE: train: ", train_loss$result(), ", validation: ", valid_loss$result()) 
    
}))
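This function still has to be called once per epoch. A minimal driver loop could look as follows (the number of epochs is an illustrative assumption; the running metrics are reset in between):

n_epochs <- 10  # illustrative choice

for (epoch in 1:n_epochs) {
  cat("Epoch: ", epoch, " -----------\n")
  train_loss$reset_states()
  valid_loss$reset_states()
  training_loop(train_ds, valid_ds, epoch)
}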

Depicted graphically, we see that the model trains well, but extrapolation does not surpass a certain threshold (which is reached early, after training for just two epochs).

Figure 2: MSE per epoch on training and validation sets.

This is not too surprising though, given the model’s architectural simplicity and modest size.

Evaluation

Here, we first present two other baselines which – given a highly complex and chaotic system like the atmosphere – may sound irritatingly simple and yet be quite hard to beat. The metric used for comparison is latitudinally weighted root-mean-square error. Latitudinal weighting up-weights the lower latitudes and down-weights the higher ones.
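In formula terms, the weight for latitude index j is the cosine of that latitude, normalized by the mean cosine over all latitudes, and the quantity computed by the function below is, per channel,

\text{wRMSE} = \sqrt{\frac{1}{N_{time}\, N_{lat}\, N_{lon}} \sum_{i,j,k} L(j)\,\big(f_{i,j,k} - t_{i,j,k}\big)^2},
\qquad
L(j) = \frac{\cos(\mathrm{lat}_j)}{\frac{1}{N_{lat}} \sum_{j'} \cos(\mathrm{lat}_{j'})}

where f is the forecast and t the ground truth. (The WeatherBench paper averages per-time-step spatial RMSEs instead of pooling all squared errors under a single root; for our rough comparison the difference is of minor importance.)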

deg2rad <- function(d) {
  (d / 180) * pi
}

lats <- tidync("geopotential_500/geopotential_500hPa_2015_5.625deg.nc")$transforms$lat %>%
  select(lat) %>%
  pull()

lat_weights <- cos(deg2rad(lats))
lat_weights <- lat_weights / mean(lat_weights)

weighted_rmse <- function(forecast, ground_truth) {
  error <- (forecast - ground_truth) ^ 2
  for (i in seq_along(lat_weights)) {
    error[, i, , ] <- error[, i, , ] * lat_weights[i]
  }
  apply(error, 4, mean) %>% sqrt()
}

Baseline 1: Weekly climatology

In general, climatology refers to long-term averages computed over defined time ranges. Here, we first calculate weekly averages based on the training set. These averages are then used to forecast the variables in question for the time period used as the test set.

Step one uses tidync, ncmeta, RNetCDF and lubridate to compute weekly averages for 2015, following the ISO week date system.

train_file <- "geopotential_500/geopotential_500hPa_2015_5.625deg.nc"

times_train <- (tidync(train_file) %>% activate("D2") %>% hyper_array())$time

time_unit_train <- ncmeta::nc_atts(train_file, "time") %>%
  tidyr::unnest(cols = c(value)) %>%
  dplyr::filter(name == "units")

time_parts_train <- RNetCDF::utcal.nc(time_unit_train$value, times_train)

iso_train <- ISOdate(
  time_parts_train[, "year"],
  time_parts_train[, "month"],
  time_parts_train[, "day"],
  time_parts_train[, "hour"],
  time_parts_train[, "minute"],
  time_parts_train[, "second"]
)

isoweeks_train <- map(iso_train, isoweek) %>% unlist()

train_by_week <- apply(train_all, c(2, 3, 4), function(x) {
  tapply(x, isoweeks_train, function(y) {
    mean(y)
  })
})

dim(train_by_week)
53 32 64 2

Step two then runs through the test set, mapping dates to the corresponding ISO weeks and looking up the weekly averages computed from the training set:

test_file <- "geopotential_500/geopotential_500hPa_2017_5.625deg.nc"

times_test <- (tidync(test_file) %>% activate("D2") %>% hyper_array())$time

time_unit_test <- ncmeta::nc_atts(test_file, "time") %>%
  tidyr::unnest(cols = c(value)) %>%
  dplyr::filter(name == "units")

time_parts_test <- RNetCDF::utcal.nc(time_unit_test$value, times_test)

iso_test <- ISOdate(
  time_parts_test[, "year"],
  time_parts_test[, "month"],
  time_parts_test[, "day"],
  time_parts_test[, "hour"],
  time_parts_test[, "minute"],
  time_parts_test[, "second"]
)

isoweeks_test <- map(iso_test, isoweek) %>% unlist()

climatology_forecast <- test_all

for (i in 1:dim(climatology_forecast)[1]) {
  week <- isoweeks_test[i]
  lookup <- train_by_week[week, , , ]
  climatology_forecast[i, , ,] <- lookup
}

For this baseline, the latitudinally weighted RMSE amounts to roughly 975 for geopotential and 4 for temperature.

wrmse <- weighted_rmse(climatology_forecast, test_all)
round(wrmse, 2)
974.50   4.09

Baseline 2: Persistence forecast

The second commonly used baseline makes a straightforward assumption: Tomorrow’s weather is today’s weather, or, in our case: In three days, things will be just like they are now.

Computation for this metric is almost a one-liner. And as it turns out, for the given lead time (three days), performance is not too dissimilar from that obtained via weekly climatology:

persistence_forecast <- test_all[1:(dim(test_all)[1] - lead_time), , ,]

test_period <- test_all[(lead_time + 1):dim(test_all)[1], , ,]

wrmse <- weighted_rmse(persistence_forecast, test_period)

round(wrmse, 2)
937.55  4.31

Baseline 3: Simple convnet

How does the simple deep learning model stack up against these two?

To answer that question, we first need to obtain predictions on the test set.

test_wrmses <- data.frame()

test_loss <- tf$keras$metrics$Mean(name = 'test_loss')

test_step <- function(test_batch, batch_index) {
  predictions <- model(test_batch[[1]])
  l <- loss(test_batch[[2]], predictions)
  
  # undo the standardization before computing the weighted RMSE
  predictions <- predictions %>% as.array()
  predictions[, , , 1] <- predictions[, , , 1] * level_sds[1] + level_means[1]
  predictions[, , , 2] <- predictions[, , , 2] * level_sds[2] + level_means[2]
  
  wrmse <- weighted_rmse(predictions, test_all[batch_index:(batch_index + 31), , ,])
  test_wrmses <<- test_wrmses %>% bind_rows(c(z = wrmse[1], temp = wrmse[2]))

  test_loss(l)
}

test_iterator <- as_iterator(test_ds)

batch_index <- 0
while (TRUE) {
  test_batch <- test_iterator %>% iter_next()
  if (is.null(test_batch))
    break
  batch_index <- batch_index + 1
  test_step(test_batch, as.integer(batch_index))
}

test_loss$result() %>% as.numeric()
3821.016

Thus, average loss on the test set parallels that seen on the validation set. As to latitudinally weighted RMSE, it turns out to be higher for the DL baseline than for the other two:

      z    temp 
1521.47    7.70 

Conclusion

At first glance, seeing the DL baseline perform worse than the others might feel anticlimactic. But if you think about it, there is no need to be disappointed.

For one, given the enormous complexity of the task, these heuristics are not as easy to outsmart as they may seem. Take persistence: Depending on lead time – how far into the future we are forecasting – the wisest guess may actually be that everything will stay the same. What would you guess the weather will look like in five minutes? — Same with weekly climatology: Looking back at how warm it was, at a given location, that same week two years ago does not in general sound like a bad strategy.

Second, the DL baseline shown is as basic as it can get, architecture- as well as parameter-wise. More sophisticated and powerful architectures have been developed that not only surpass these baselines by far, but can even compete with physical models (cf. especially Rasp and Thuerey (Rasp and Thuerey 2020), already mentioned above). Unfortunately, models like that need to be trained on a lot of data.

However, other weather-related applications (other than medium-range forecasting, that is) may be more within reach for people interested in the subject. For those, we hope we have provided a useful introduction. Thanks for reading!

Rasp, Stephan, Peter D. Dueben, Sebastian Scher, Jonathan A. Weyn, Soukayna Mouatadid, and Nils Thuerey. 2020. “WeatherBench: A Benchmark Dataset for Data-Driven Weather Forecasting.” arXiv e-prints, February, arXiv:2002.00469. https://arxiv.org/abs/2002.00469.

Rasp, Stephan, and Nils Thuerey. 2020. “Purely Data-Driven Medium-Range Weather Forecasting Achieves Comparable Skill to Physical Models at Similar Resolution.” https://arxiv.org/abs/2008.08626.

Weyn, Jonathan A., Dale R. Durran, and Rich Caruana. n.d. “Improving Data-Driven Global Weather Prediction Using Deep Convolutional Neural Networks on a Cubed Sphere.” Journal of Advances in Modeling Earth Systems n/a (n/a): e2020MS002109. https://doi.org/10.1029/2020MS002109.
