This blog post provides a simple introduction to Deep Kernel Machines (DKMs), a novel supervised learning method that combines the advantages of deep learning and kernel methods. This work forms the foundation of my current research on convolutional DKMs, which is supervised by Dr Laurence Aitchison.
Why aren’t kernels cool anymore?
Kernel methods were once top-dog in machine learning due to their ability to implicitly map data to complicated feature spaces, where the problem usually becomes simpler, without ever explicitly computing the transformation. However, in the past decade deep learning has become the new king for complicated tasks like computer vision and natural language processing.
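To make the "implicit mapping" idea concrete, here is a toy illustration of the kernel trick (my own sketch, not from the post; the feature map and kernel are standard textbook choices): the quadratic kernel $k(x, y) = (x^\top y)^2$ on $\mathbb{R}^2$ computes the same inner product as an explicit degree-2 feature map, without ever constructing that map.

```python
import numpy as np

def quad_kernel(x, y):
    # Implicit computation: one dot product, then square.
    return np.dot(x, y) ** 2

def phi(x):
    # Explicit degree-2 feature map for 2-D inputs:
    # phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2).
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

implicit = quad_kernel(x, y)       # (1*3 + 2*0.5)^2 = 16
explicit = np.dot(phi(x), phi(y))  # same value via the explicit features
assert np.isclose(implicit, explicit)
```

For high-degree polynomial or RBF kernels the explicit feature space is huge or infinite-dimensional, which is exactly why computing the kernel directly is attractive.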
Neural networks are flexible when learning representations
The reason is twofold: first, neural networks have millions of tunable parameters that allow them to learn their feature mappings automatically from the data, which is crucial for domains like images, which are too complex for us to specify good, useful features by hand. Second, their layer-wise structure means these mappings can be built up into increasingly abstract representations, while each layer itself remains relatively simple. For example, trying to learn a single function that takes in pixels from pictures of animals and outputs their species is difficult; it is easier to map pixels to corners and edges, then to shapes, then body parts, and so on.
Kernel methods are rigid when learning representations
It is therefore notable that classical kernel methods lack these characteristics: most kernels have a very small number of tunable hyperparameters, meaning their mappings cannot flexibly adapt to the task at hand, leaving us stuck with a feature space that, while complex, might be ill-suited to our problem.
A post by Dan Milner, PhD student on the Compass programme.
This blog post describes an approach being developed to deliver rapid classification of farmer strategies. The data come from a survey conducted with two groups of smallholder farmers (see image 2), one group living in the Taita Hills area of southern Kenya and the other in Yebelo, southern Ethiopia. This work would not have been possible without the support of my supervisors James Hammond, from the International Livestock Research Institute (ILRI) (and developer of the Rural Household Multi Indicator Survey, RHoMIS, used in this research), as well as Andrew Dowsey, Levi Wolf and Kate Robson Brown from the University of Bristol.
Aims of the project
The goal of my PhD is to contribute a landscape approach to analysing agricultural systems. On-farm practices are an important part of an agricultural system and are one of the trilogy of components that make up what Rizzo et al. (2022) call 'agricultural landscape dynamics', the other two components being Natural Resources and Landscape Patterns. To understand how a farm interacts with and responds to Natural Resources and Landscape Patterns, it seems sensible to try to understand not just each farm's inputs and outputs but its overall strategy and component practices.
My area of research is studying Quantile Generalised Additive Models (QGAMs), with my main application lying in energy demand forecasting. In particular, my focus is on developing faster and more stable fitting methods and model selection techniques. This blog post aims to briefly explain what QGAMs are, how to fit them, and a short illustrative example applying these techniques to data on extreme rainfall in Switzerland. I am supervised by Matteo Fasiolo and my research is sponsored by Électricité de France (EDF).
Quantile Generalised Additive Models
QGAMs are essentially the result of combining quantile regression (QR; performing regression on a specific quantile of the response) with a generalised additive model (GAM; fitting a model assuming additive smooth effects). Here we are in the regression setting, so let $F(y|\mathbf{x})$ be the conditional c.d.f. of a response, $y$, given a $d$-dimensional vector of covariates, $\mathbf{x}$. In QR we model the $\tau$th quantile, that is, $\mu(\mathbf{x}) = F^{-1}(\tau|\mathbf{x})$.
This might be useful in cases where we do not need to model the full distribution of $y$ and only need one particular quantile of interest (for example, urban planners might only be interested in estimates of extreme rainfall, i.e. a very high quantile). It also allows us to make no assumptions about the underlying true distribution; instead, we can model the distribution empirically using multiple quantiles.
We can define the $\tau$th quantile as the minimiser of the expected loss
$$L(\mu | \mathbf{x}) = \mathbb{E} \left\{ \rho_\tau (y - \mu) \,|\, \mathbf{x} \right\}$$
w.r.t. $\mu$, where
$$\rho_\tau(z) = (\tau - 1) z \, \mathbb{1}(z < 0) + \tau z \, \mathbb{1}(z \geq 0)$$
is known as the pinball loss (Koenker, 2005).
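To see numerically that minimising the pinball loss really does recover the $\tau$th quantile, here is a small check of my own (the simulated data and grid are illustrative): a grid search for the $\mu$ minimising the empirical pinball loss lands on the empirical quantile.

```python
import numpy as np

def pinball(z, tau):
    # rho_tau(z): slope tau for positive residuals, (tau - 1) for negative.
    return np.where(z >= 0, tau * z, (tau - 1) * z)

rng = np.random.default_rng(0)
y = rng.normal(size=1000)
tau = 0.8

# Grid search over candidate values of mu.
grid = np.linspace(-3, 3, 601)
losses = [pinball(y - mu, tau).mean() for mu in grid]
mu_hat = grid[np.argmin(losses)]

# The minimiser matches the empirical 0.8 quantile (up to grid resolution).
assert abs(mu_hat - np.quantile(y, tau)) < 0.05
```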
We can approximate the above expression empirically given a sample of size $n$, which gives the quantile estimator $\hat{\mu}(\mathbf{x}) = \mathbf{x}^\top \hat{\beta}$, where
$$\hat{\beta} = \underset{\beta}{\arg\min} \; \frac{1}{n} \sum_{i=1}^n \rho_\tau \left( y_i - \mathbf{x}_i^\top \beta \right),$$
where $\mathbf{x}_i$ is the $i$th vector of covariates, and $\beta$ is the vector of regression coefficients.
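Putting this estimator into code gives a minimal linear quantile regression fit (a sketch of my own; the simulated data and choice of optimiser are illustrative assumptions, and real software uses far more careful algorithms): we directly minimise the empirical pinball loss over the coefficients.

```python
import numpy as np
from scipy.optimize import minimize

def pinball(z, tau):
    return np.where(z >= 0, tau * z, (tau - 1) * z)

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0, 1, size=n)
X = np.column_stack([np.ones(n), x])      # intercept + one covariate
y = 2.0 * x + rng.uniform(-1, 1, size=n)  # conditional median of y is 2x

def objective(beta, tau=0.5):
    return pinball(y - X @ beta, tau).mean()

# Nelder-Mead copes with the non-smooth pinball loss in low dimensions.
beta_hat = minimize(objective, x0=np.zeros(2), method="Nelder-Mead").x

# The fitted coefficients should be roughly (0, 2).
assert abs(beta_hat[1] - 2.0) < 0.5
```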
So far we have described QR, so to turn this into a QGAM we assume $\mu$ has additive structure, that is, we can write the $\tau$th conditional quantile as
$$\mu(\mathbf{x}) = \sum_{j=1}^m f_j(\mathbf{x}),$$
where the additive terms $f_j$ are defined in terms of basis functions (e.g. spline bases). A marginal smooth effect could be, for example,
$$f_j(x_j) = \sum_{k=1}^{r_j} \beta_{jk} \, b_{jk}(x_j),$$
where the $\beta_{jk}$ are unknown coefficients, the $b_{jk}$ are known spline basis functions and $r_j$ is the basis dimension.
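As a concrete (hypothetical) example of such a basis expansion, we can build a small design matrix of truncated-power cubic spline basis functions for a single covariate; the smooth effect is then just a linear combination of its columns. This simple basis stands in for the more sophisticated bases used by real GAM software.

```python
import numpy as np

def truncated_power_basis(x, knots):
    # Cubic truncated-power basis: 1, x, x^2, x^3, then (x - k)_+^3 per knot.
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

x = np.linspace(0, 1, 200)
knots = [0.25, 0.5, 0.75]
B = truncated_power_basis(x, knots)  # basis dimension r_j = 4 + 3 = 7

# A smooth effect f_j(x) = sum_k beta_jk * b_jk(x) is a matrix-vector product.
beta = np.array([0.0, 1.0, -2.0, 0.5, 3.0, -3.0, 1.0])
f = B @ beta
assert B.shape == (200, 7) and f.shape == (200,)
```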
Denote by $\mathbf{b}(\mathbf{x})$ the vector of basis functions evaluated at $\mathbf{x}$; then the $n \times d$ design matrix $\mathbf{B}$ is defined as having $i$th row $\mathbf{B}_i = \mathbf{b}(\mathbf{x}_i)^\top$, for $i = 1, \dots, n$, where $d$ is the total basis dimension over all the smooth terms. Now the quantile estimate is defined as $\hat{\mu}(\mathbf{x}) = \mathbf{b}(\mathbf{x})^\top \hat{\beta}$. When estimating the regression coefficients, we put a ridge penalty on $\beta$ to control the complexity of $\hat{\mu}$, thus we seek to minimise the penalised pinball loss
$$V(\beta, \gamma, \sigma) = \frac{1}{\sigma} \sum_{i=1}^n \rho_\tau \left\{ y_i - \mu(\mathbf{x}_i) \right\} + \frac{1}{2} \sum_j \gamma_j \beta^\top \mathbf{S}_j \beta,$$
where $\gamma$ is a vector of positive smoothing parameters, $\sigma$ is the learning rate and the $\mathbf{S}_j$'s are positive semi-definite matrices which penalise the wiggliness of the corresponding effect $f_j$. Minimising $V$ with respect to $\beta$, given fixed $\gamma$ and $\sigma$, leads to the maximum a posteriori (MAP) estimator $\hat{\beta}$.
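The penalised objective is easy to write down in code. Below is a small sketch of my own, with an illustrative second-difference penalty matrix standing in for the $\mathbf{S}_j$: the penalty term vanishes for a constant coefficient vector and grows for a wiggly one, so increasing the smoothing parameter raises the price paid for wiggliness.

```python
import numpy as np

def pinball(z, tau):
    return np.where(z >= 0, tau * z, (tau - 1) * z)

def penalised_pinball(beta, B, y, tau, gamma, S, sigma):
    # (1/sigma) * sum_i rho_tau(y_i - mu(x_i)) + 0.5 * gamma * beta' S beta
    fit = pinball(y - B @ beta, tau).sum() / sigma
    penalty = 0.5 * gamma * beta @ S @ beta
    return fit + penalty

# Second-difference penalty matrix: penalises wiggle in adjacent coefficients.
d = 7
D = np.diff(np.eye(d), n=2, axis=0)
S = D.T @ D

rng = np.random.default_rng(2)
B = rng.normal(size=(100, d))
y = rng.normal(size=100)
smooth_beta = np.ones(d)                               # constant: zero penalty
wiggly_beta = np.array([1., -1., 1., -1., 1., -1., 1.])

assert smooth_beta @ S @ smooth_beta == 0
assert wiggly_beta @ S @ wiggly_beta > 0
# A larger smoothing parameter strictly increases the penalised loss
# of a wiggly coefficient vector.
lo = penalised_pinball(wiggly_beta, B, y, 0.5, gamma=0.0, S=S, sigma=1.0)
hi = penalised_pinball(wiggly_beta, B, y, 0.5, gamma=10.0, S=S, sigma=1.0)
assert hi > lo
```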
There are a number of methods to tune the smoothing parameters and learning rate. The framework from Fasiolo et al. (2021) consists of:
calibrating the learning rate $\sigma$ by Integrated Kullback–Leibler (IKL) minimisation
selecting the smoothing parameters $\gamma$ by Laplace Approximate Marginal Loss (LAML) minimisation
estimating the coefficients $\beta$ by minimising the penalised Extended Log-F (ELF) loss (note that this loss is simply a smoothed version of the pinball loss introduced above)
For more detail on what each of these steps means I refer the reader to Fasiolo et al. (2021). Clearly this three-layered nested optimisation can take a long time to converge, especially when we have large datasets, which is often the case in energy demand forecasting. My project therefore aims to adapt this framework to make it less computationally expensive.
Application to Swiss Extreme Rainfall
Here I will briefly discuss one potential application of QGAMs: an analysis of a dataset consisting of observations of the most extreme 12-hourly total rainfall each year at 65 Swiss weather stations between 1981 and 2015. This data set can be found in the R package gamair, and for model fitting I used the package mgcViz.
A basic QGAM for the 50% quantile (i.e. $\tau = 0.5$) can be fitted using the following formula
$$\mu(\mathbf{x}) = \beta_0 + \psi(\texttt{region}) + f_1(\texttt{nao}) + f_2(\texttt{elevation}) + f_3(\texttt{year}) + f_4(\texttt{E}, \texttt{N}),$$
where $\beta_0$ is the intercept term, $\psi(\texttt{region})$ is a parametric factor for climate region, $f_1, \dots, f_4$ are smooth effects, $\texttt{nao}$ is the Annual North Atlantic Oscillation index, $\texttt{elevation}$ is the metres above sea level, $\texttt{year}$ is the year of observation, and $\texttt{E}$ and $\texttt{N}$ are the degrees east and north respectively.
After fitting in mgcViz, we can plot the smooth effects and see how these affect the extreme yearly rainfall in Switzerland.
From the plots we observe the following: as the NAO index increases there is a somewhat oscillatory effect on extreme rainfall; as elevation increases we see a steady increase in extreme rainfall, followed by a sharp drop above an elevation of around 2500 metres; across years the effect is relatively flat, indicating that extreme rainfall patterns might not change much over time (hopefully the reader won't regard this as evidence against climate change); and from the spatial plot we see that the south-east of Switzerland appears to be more prone to heavy extreme rainfall.
We could also look into fitting a 3D spatio-temporal tensor product effect, using the following formula
$$\mu(\mathbf{x}) = \beta_0 + \psi(\texttt{region}) + f_1(\texttt{nao}) + f_2(\texttt{elevation}) + t(\texttt{E}, \texttt{N}, \texttt{year}),$$
where $t(\texttt{E}, \texttt{N}, \texttt{year})$ is the tensor product effect between $\texttt{E}$, $\texttt{N}$ and $\texttt{year}$. We can examine the spatial effect on extreme rainfall over time by plotting the smooths.
There does not seem to be a significant interaction between the location and year, since we see little change between the plots, except for perhaps a slight decrease in the south-east.
Finally, we can make the most of the QGAM framework by fitting multiple quantiles at once. Here we fit the first formula for several quantiles ranging from $\tau = 0.1$ to $\tau = 0.9$, and we can examine the fitted smooths of the spatial effect for each quantile.
Interestingly the spatial effect is much stronger in higher quantiles than in the lower ones, where we see a relatively weak effect at the 0.1 quantile, and a very strong effect at the 0.9 quantile ranging between around -30 and +60.
The example discussed here is but one of many potential applications of QGAMs. As mentioned in the introduction, my research area is motivated by energy demand forecasting. My current/future research is focused on adapting the QGAM fitting framework to obtain faster fitting.
Fasiolo, M., S. N. Wood, M. Zaffran, R. Nedellec, and Y. Goude (2021). Fast calibrated additive quantile regression. Journal of the American Statistical Association 116(535), 1402–1412.
Koenker, R. (2005). Quantile Regression. Cambridge University Press.
My PhD focuses on the application of statistical methods to volcanic hazard forecasting. This research is jointly supervised by Professor Jeremy Philips (School of Earth Sciences) and Professor Anthony Lee.
A post by Dan Ward, PhD student on the Compass programme.
Normalising flows are black-box approximators of continuous probability distributions that facilitate both efficient density evaluation and sampling. They function by learning a bijective transformation that maps between a complex target distribution and a simple distribution of matching dimension, such as a standard multivariate Gaussian distribution.
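As a toy illustration of the change-of-variables idea underlying flows (my own minimal example, not taken from the post): a single affine bijection $x = a z + b$ applied to a standard Gaussian base distribution yields a density we can evaluate exactly via the Jacobian term, and sample from by pushing base samples through the map.

```python
import numpy as np

def base_log_density(z):
    # Standard Gaussian base distribution.
    return -0.5 * (z**2 + np.log(2 * np.pi))

def affine_flow_log_density(x, a, b):
    # x = a * z + b  =>  z = (x - b) / a, and log|dz/dx| = -log|a|.
    z = (x - b) / a
    return base_log_density(z) - np.log(abs(a))

# The flow density matches the N(b, a^2) density it implicitly defines.
x, a, b = 1.3, 2.0, 0.5
lp = affine_flow_log_density(x, a, b)
expected = -0.5 * ((x - b) / a) ** 2 - np.log(a * np.sqrt(2 * np.pi))
assert np.isclose(lp, expected)
```

Real flows stack many such bijections, each with learnable parameters and a cheap Jacobian determinant, which is what lets them approximate much more complex targets.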
A post by Sam Stockman, PhD student on the Compass programme.
Throughout my PhD I aim to bridge a gap between advances made in the machine learning community and the age-old problem of earthquake forecasting. In this cross-disciplinary work with Max Werner from the School of Earth Sciences and Dan Lawson from the School of Mathematics, I hope to create more powerful, efficient and robust models for forecasting, that can make earthquake prone areas safer for their inhabitants.
For years seismologists have sought to model the structure and dynamics of the earth in order to make predictions about earthquakes. They have mapped out the structure of fault lines and conducted experiments in the lab, submitting rock to great amounts of force in order to simulate plate tectonics on a small scale. Yet when trying to forecast earthquakes on a short time scale (that's hours and days, not tens of years), these models based on knowledge of the underlying physics are regularly outperformed by models that are statistically motivated. In statistical seismology we seek to make predictions by looking at distributions of the times, locations and magnitudes of earthquakes and using them to forecast the future.
A post by Jack Simons, PhD student on the Compass programme.
I began my PhD with my supervisors, Dr Song Liu and Professor Mark Beaumont, with the intention of combining their respective fields of research: Density Ratio Estimation (DRE) and Simulation Based Inference (SBI).
DRE is a rapidly growing paradigm in machine learning which (broadly) provides efficient methods of comparing densities without the need to compute each density individually. For a comprehensive yet accessible overview of DRE in Machine Learning see .
SBI is a group of methods which seek to solve Bayesian inference problems when the likelihood function is intractable. If you wish for a concise overview of the current work, as well as motivation then I recommend .
Last year we released a paper, Variational Likelihood-Free Gradient Descent, which combined these fields. This blog post seeks to condense, and make more accessible, the contents of the paper.
Motivation: Likelihood-Free Inference
Let’s begin by introducing likelihood-free inference. We wish to do inference on the posterior distribution of parameters $\theta$ for a specific observation $x_0$, i.e. we wish to infer $p(\theta | x_0)$, which can be decomposed via Bayes’ rule as
$$p(\theta | x_0) = \frac{p(x_0 | \theta) \, p(\theta)}{\int p(x_0 | \theta) \, p(\theta) \, d\theta}.$$
The likelihood-free setting is one where, in addition to the usual intractability of the normalising constant in the denominator, the likelihood, $p(x_0 | \theta)$, is also intractable. In lieu of this, we require an implicit likelihood which describes the relation between data and parameters in the form of a forward model/simulator (hence simulation based inference!).
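To make the likelihood-free setting concrete, here is a minimal simulation-based inference sketch of my own (plain rejection ABC, not the variational method of the paper; the model and tolerances are illustrative): we never evaluate $p(x_0 | \theta)$, we only run the simulator and keep parameters whose simulated data resemble the observation.

```python
import numpy as np

rng = np.random.default_rng(3)

# "Observed" data: 50 draws from N(2, 1); we pretend the likelihood
# is a black box and only summarise the data by its mean.
x_obs = rng.normal(loc=2.0, scale=1.0, size=50)
s_obs = x_obs.mean()

def simulator(theta, rng):
    # Forward model defining an implicit likelihood N(theta, 1).
    return rng.normal(loc=theta, scale=1.0, size=50)

# Rejection ABC: propose from the prior N(0, 3^2), accept a proposal
# if its simulated summary is close to the observed summary.
prior_draws = rng.normal(loc=0.0, scale=3.0, size=20000)
accepted = [th for th in prior_draws
            if abs(simulator(th, rng).mean() - s_obs) < 0.1]

posterior_mean = np.mean(accepted)
# The accepted draws concentrate around the true theta = 2.
assert len(accepted) > 10 and abs(posterior_mean - 2.0) < 0.5
```

Rejection ABC scales poorly with tolerance and dimension, which is part of the motivation for the more sophisticated simulation-based methods discussed above.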
Probability theory is a branch of mathematics centred around the abstract manipulation and quantification of uncertainty and variability. It forms a basic building block of the theory and practice of statistics, enabling us to turn the complex nature of observable phenomena into meaningful information. It is through this reliance that the debate over the true (or more correct) underlying nature of probability theory has profound effects on how statisticians do their work. The current opposing sides of the debate in question are the Frequentists and the Bayesians. Frequentists believe that probability is intrinsically linked to the numeric regularity with which events occur, i.e. their frequency. Bayesians, however, believe that probability is an expression of someone's degree of belief or confidence in a certain claim. In everyday parlance we use both of these concepts interchangeably: I estimate one in five people have Covid; I was 50% confident that the football was coming home. It should be noted that the latter of the two is not a repeatable event per se. We cannot roll back time to check what the repeatable sequence would result in.