A post by Sam Stockman, PhD student on the Compass programme.
Throughout my PhD I aim to bridge a gap between advances made in the machine learning community and the age-old problem of earthquake forecasting. In this inter-departmental work with Max Werner from the school of Earth Sciences and Dan Lawson from the school of Maths, I hope to create more powerful, efficient and robust models for forecasting, that can make earthquake prone areas safer for their inhabitants.
For years seismologists have sought to model the structure and dynamics of the Earth in order to make predictions about earthquakes. They have mapped out the structure of fault lines and conducted laboratory experiments in which they subject rock to great amounts of force in order to simulate plate tectonics on a small scale. Yet when trying to forecast earthquakes on a short time scale (that’s hours and days, not tens of years), these models based on knowledge of the underlying physics are regularly outperformed by models that are statistically motivated. In statistical seismology, we seek to make predictions by looking at distributions of the times, locations and magnitudes of earthquakes, and use them to forecast the future.
The general focus of my PhD research is in some sense to produce models with the following three characteristics:
Well-calibrated (uncertainty estimates from the predictive process reflect the true variance of the target values)
Non-linear (able to model non-linear relationships in the data)
Scalable (i.e. we can run it on large datasets)
At a vague high level, we can have two out of three of those requirements without too much difficulty, but including the third causes trouble. For example, Bayesian linear models satisfy good calibration and scalability but (as the name suggests) fail at modelling non-linear functions. Similarly, neural networks are famously good at modelling non-linear functions, and much work has been spent on improving their efficiency and scalability, but producing well-calibrated predictions is a complex additional feature. I am approaching the problem from the angle of Gaussian Processes, which provide well-calibrated, non-linear models at the expense of scalability.
Gaussian Processes (GPs)
See Conor’s blog post for a more detailed introduction to GPs; here I will provide a basic summary of the key facts we need for the rest of the post.
The functional view of GPs is that we define a distribution over functions:

$$f(\cdot) \sim \mathcal{GP}\left(m(\cdot),\, k(\cdot, \cdot)\right),$$

where $m(\cdot)$ and $k(\cdot,\cdot)$ are the mean function and kernel function respectively, which play analogous roles to the usual mean and covariance of a Gaussian distribution.
In practice, we only ever observe some finite collection of points, corrupted by noise, which we can hence view as a draw from some multivariate normal distribution:

$$\mathbf{y}_n \sim \mathcal{N}\left(\mathbf{m}_n,\, K_{nn} + \sigma^2 I_n\right).$$

(Here the subscript $n$ denotes the dimensionality of the vector or matrix.)
When we use GPs to generate predictions at some new test point $x_*$, we use the following equations, which I will not derive here, for the predicted mean and variance respectively:

$$\mu(x_*) = m(x_*) + \mathbf{k}_*^\top \left(K_{nn} + \sigma^2 I_n\right)^{-1} \left(\mathbf{y}_n - \mathbf{m}_n\right),$$
$$\sigma^2(x_*) = k(x_*, x_*) - \mathbf{k}_*^\top \left(K_{nn} + \sigma^2 I_n\right)^{-1} \mathbf{k}_*.$$

The key point here is that both predictive functions involve the inversion of an $n \times n$ matrix, at a cost of $\mathcal{O}(n^3)$.
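To make that cost concrete, here is a minimal NumPy sketch of GP prediction, assuming a zero mean function and a squared-exponential kernel (all hyperparameter values here are illustrative choices of mine, not from any particular implementation). The O(n^3) cost sits in solving the n-by-n linear system:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(x, x') for 1-d inputs."""
    sq_dists = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_predict(X, y, x_star, noise=0.1):
    """GP predictive mean and variance at a single test point (zero mean function)."""
    K = rbf_kernel(X, X) + noise**2 * np.eye(len(X))  # the n x n matrix
    k_star = rbf_kernel(X, np.array([x_star]))[:, 0]
    alpha = np.linalg.solve(K, y)                     # O(n^3) -- the bottleneck
    mean = k_star @ alpha
    var = rbf_kernel(np.array([x_star]), np.array([x_star]))[0, 0] \
          - k_star @ np.linalg.solve(K, k_star)
    return mean, var

# Illustrative usage: noisy observations of sin(x)
X = np.linspace(0, 2 * np.pi, 20)
y = np.sin(X)
mean, var = gp_predict(X, y, x_star=np.pi / 2)
```

Doubling the number of observations roughly multiplies the cost of the solve by eight, which is exactly why scalability becomes the sticking point.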
A post by Jack Simons, PhD student on the Compass programme.
I began my PhD with my supervisors, Dr Song Liu and Professor Mark Beaumont, with the intention of combining their respective fields of research, Density Ratio Estimation (DRE) and Simulation-Based Inference (SBI):
DRE is a rapidly growing paradigm in machine learning which (broadly) provides efficient methods of comparing densities without the need to compute each density individually. For a comprehensive yet accessible overview of DRE in Machine Learning see .
SBI is a group of methods which seek to solve Bayesian inference problems when the likelihood function is intractable. If you wish for a concise overview of the current work, as well as motivation then I recommend .
Last year we released a paper, Variational Likelihood-Free Gradient Descent, which combined these fields. This blog post seeks to condense, and make more accessible, the contents of the paper.
Motivation: Likelihood-Free Inference
Let’s begin by introducing likelihood-free inference. We wish to do inference on the posterior distribution of parameters $\theta$ for a specific observation $x_0$, i.e. we wish to infer $p(\theta \mid x_0)$, which can be decomposed via Bayes’ rule as

$$p(\theta \mid x_0) = \frac{p(x_0 \mid \theta)\, p(\theta)}{p(x_0)}.$$

The likelihood-free setting is that, in addition to the usual intractability of the normalising constant in the denominator, the likelihood, $p(x_0 \mid \theta)$, is also intractable. In lieu of this, we require an implicit likelihood which describes the relation between data and parameters in the form of a forward model/simulator (hence simulation-based inference!).
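To make the setting concrete, here is a minimal sketch of what an implicit likelihood looks like in code. The toy simulator, the prior, and all parameter values below are made up for illustration; the point is only that we can draw data given parameters, but never evaluate a density:

```python
import numpy as np

def simulator(theta, rng, n=100):
    """Forward model: we can sample x ~ p(x | theta), but the induced
    likelihood is treated as an intractable black box."""
    # Hypothetical toy simulator: a nonlinear transform of Gaussian noise.
    z = rng.normal(size=n)
    return theta[0] + theta[1] * np.tanh(z)

def sample_prior(rng):
    """Prior over theta = (location, scale)."""
    return np.array([rng.normal(0.0, 2.0), rng.uniform(0.5, 2.0)])

def draw_pair(rng):
    """SBI methods start from (theta, x) pairs drawn from the joint."""
    theta = sample_prior(rng)
    return theta, simulator(theta, rng)

rng = np.random.default_rng(0)
pairs = [draw_pair(rng) for _ in range(5)]
```

Everything an SBI method learns must come from such joint samples, since the likelihood itself can never be written down.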
Many real-world optimisation problems involve repeated rather than one-off decisions. A decision maker (who we refer to as an agent) is required to repeatedly perform actions from a set of available options. After taking an action, the agent will receive a reward based on the action performed. The agent can then use this feedback to inform later decisions. Some examples of such problems are:
Choosing advertisements to display on a website each time a page is loaded to maximise click-through rate.
Calibrating the temperature to maximise the yield from a chemical reaction.
Distributing a budget between university departments to maximise research output.
Choosing the best route to commute to work.
In each case there is a fundamental trade-off between exploitation and exploration. On the one hand, the agent should act in ways which exploit the knowledge they have accumulated to promote their short-term reward, whether that’s the yield of a chemical process or the click-through rate on advertisements. On the other hand, the agent should explore new actions in order to increase their understanding of their environment in ways which may translate into future rewards.
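As a minimal illustration of this trade-off, here is a sketch of the classic epsilon-greedy strategy on a toy three-action problem. The Gaussian rewards, the arm means and the parameter values are illustrative assumptions of mine, not part of any particular method discussed here:

```python
import random

def epsilon_greedy(true_means, n_rounds=10_000, epsilon=0.1, seed=0):
    """With probability epsilon explore a random action; otherwise
    exploit the action with the best estimated reward so far."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)    # times each action was taken
    values = [0.0] * len(true_means)  # running mean reward per action
    total = 0.0
    for _ in range(n_rounds):
        if rng.random() < epsilon:                         # explore
            a = rng.randrange(len(true_means))
        else:                                              # exploit
            a = max(range(len(true_means)), key=lambda i: values[i])
        reward = rng.gauss(true_means[a], 1.0)             # noisy feedback
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]      # incremental mean
        total += reward
    return total / n_rounds, counts

avg_reward, counts = epsilon_greedy([0.1, 0.5, 0.9])
```

Even this crude rule ends up pulling the best arm most of the time, while the forced exploration guarantees the other arms are never entirely written off.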
My work focuses on addressing the growing need for reliable, day-ahead energy demand forecasts in smart grids. In particular, we have been developing structured ensemble models for probabilistic forecasting that are able to incorporate information from a number of sources. I have undertaken this EDF-sponsored project with the help of my supervisors Matteo Fasiolo (UoB) and Yannig Goude (EDF), and in collaboration with Christian Capezza (University of Naples Federico II).
One of the largest challenges society faces is climate change. Decarbonisation will lead to both a considerable increase in demand for electricity and a change in the way it is produced. Reliable demand forecasts will play a key role in enabling this transition.

Historically, electricity has been produced by large, centralised power plants. This allows production to be relatively easily tailored to demand with little need for large-scale storage infrastructure. However, renewable methods are typically decentralised, less flexible, and supply is subject to weather conditions or other unpredictable factors. A consequence of this is that electricity production will be less able to react to sudden changes in demand; instead, it will need to be generated in advance and stored. To limit the need for large-scale and expensive electricity storage and transportation infrastructure, smart grid management systems can instead be employed. This will involve, for example, smaller, more localised energy storage options. This increases the reliance on accurate demand forecasts to inform storage management decisions, not only at the aggregate level, but possibly down at the individual household level.

The recent impact of the Covid-19 pandemic also highlighted problems in current forecasting methods, which struggled to cope with the sudden change in demand patterns. These issues call attention to the need to develop a framework for more flexible energy forecasting models that are accurate at the household level. At this level, demand is characterised by a low signal-to-noise ratio, with frequent abrupt changepoints in demand dynamics. This can be seen in Figure 1 below.
The challenges posed by forecasting at a low level of aggregation motivate the use of an ensemble approach that can incorporate information from several models and across households. In particular, we propose an additive stacking structure where we can borrow information across households by constructing a weighted combination of experts, which is generally referred to as stacking regressions.
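As a toy illustration of the basic stacking idea (not the additive stacking structure of our paper), here is a sketch that picks the weight for a two-expert combination by minimising squared error on validation data; the experts, the data and the grid search are all made up for illustration:

```python
import numpy as np

def fit_stacking_weight(expert_preds, y, grid=101):
    """Two-expert stacking: choose w in [0, 1] minimising the squared
    error of w * expert1 + (1 - w) * expert2 on validation data."""
    e1, e2 = expert_preds
    best_w, best_loss = 0.0, np.inf
    for w in np.linspace(0.0, 1.0, grid):
        loss = np.mean((w * e1 + (1.0 - w) * e2 - y) ** 2)
        if loss < best_loss:
            best_w, best_loss = w, loss
    return best_w

# Hypothetical validation data: expert 1 tracks the truth more closely.
y = np.array([1.0, 2.0, 3.0, 4.0])
e1 = y + 0.1   # good expert
e2 = y + 1.0   # badly biased expert
w = fit_stacking_weight((e1, e2), y)
```

The fitted weight leans almost entirely on the better expert; in the additive stacking setting the weights are instead modelled as functions of covariates, so different households (or times of day) can favour different experts.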
A post by Conor Crilly, PhD student on the Compass programme.
This project investigates uncertainty quantification methods for expensive computer experiments. It is supervised by Oliver Johnson of the University of Bristol, and is partially funded by AWE.
Physical systems and experiments are commonly represented, albeit approximately, using mathematical models implemented via computer code. This code, referred to as a simulator, often cannot be expressed in closed form, and is treated as a ‘black-box’. Such simulators arise in a range of application domains, for example engineering, climate science and medicine. Ultimately, we are interested in using simulators to aid some decision making process. However, for decisions made using the simulator to be credible, it is necessary to understand and quantify different sources of uncertainty induced by using the simulator. Running the simulator for a range of input combinations is what we call a computer experiment. As the simulators of interest are expensive, the available data is usually scarce. Emulation is the process of using a statistical model (an emulator) to approximate our computer code and provide an estimate of the associated uncertainty.
Intuitively, an emulator must possess two fundamental properties:
It must be cheap, relative to the code
It must provide an estimate of the uncertainty in its output
A common choice of emulator is the Gaussian process emulator, which is discussed extensively in  and described in the next section.
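As a rough sketch of this workflow, here is a toy emulation loop: a handful of runs of a stand-in "expensive" simulator, and a simple RBF-kernel Gaussian process surrogate giving cheap predictions with uncertainty. The simulator, kernel and all hyperparameter values are illustrative assumptions, not any particular production emulator:

```python
import numpy as np

def expensive_simulator(x):
    """Stand-in for a costly black-box code (in reality, hours per run)."""
    return np.sin(3 * x) + x

# A small design: we can only afford a handful of simulator runs.
X_design = np.linspace(0.0, 2.0, 8)
y_design = expensive_simulator(X_design)

def emulate(x_new, X, y, lengthscale=0.5, jitter=1e-6):
    """GP emulator: a cheap prediction plus an uncertainty estimate."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale**2)
    K = k(X, X) + jitter * np.eye(len(X))
    k_star = k(X, np.array([x_new]))[:, 0]
    mean = k_star @ np.linalg.solve(K, y)
    var = 1.0 - k_star @ np.linalg.solve(K, k_star)
    return mean, max(var, 0.0)

m, v = emulate(1.0, X_design, y_design)
```

The emulator answers in microseconds rather than hours (property 1), and its predictive variance shrinks near the design points and grows away from them (property 2).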
Types of Uncertainty
There are many types of uncertainty associated with the use of simulators, including input, model and observational uncertainty. One type of uncertainty induced by using an expensive simulator is code uncertainty, described by Kennedy and O’Hagan in their seminal paper on calibration. To paraphrase Kennedy and O’Hagan: in principle the simulator encodes a relationship between a set of inputs and a set of outputs, which we could evaluate for any given combination of inputs. However, in practice, it is not feasible to run the simulator for every combination, so acknowledging the uncertainty in the code output is required.
A post by Ed Davis, PhD student on the Compass programme.
Today is a great day to be a data scientist. More than ever, our ability to both collect and analyse data allows us to solve larger, more interesting, and more diverse problems. My research focuses on analysing networks, which cover a mind-boggling range of applications, from modelling vast computer networks to detecting schizophrenia in brain networks. In this blog, I want to share some of the cool research I have been a part of since joining the COMPASS CDT, which has to do with the analysis of dynamic networks.
A network can be defined as an ordered pair, $G = (V, E)$, where $V$ is a node (or vertex) set and $E$ is an edge set. From this definition, we can represent any $n$ node network in terms of an adjacency matrix, $\mathbf{A} \in \{0, 1\}^{n \times n}$, where for nodes $i, j \in V$,

$$\mathbf{A}_{ij} = \begin{cases} 1 & \text{if } (i, j) \in E, \\ 0 & \text{otherwise.} \end{cases}$$
When we model networks, we can assume that there are some unobservable weightings which mean that certain nodes have a higher connection probability than others. We then observe these in the adjacency matrix with some added noise (like an image that has been blurred). Under this assumption, there must exist some unobservable noise-free version of the adjacency matrix (i.e. the image) that we call the probability matrix, $\mathbf{P}$. Mathematically, we represent this by saying

$$\mathbf{A}_{ij} \sim \text{Bernoulli}\left(\mathbf{P}_{ij}\right),$$
where we have chosen a Bernoulli distribution as it will return either a 1 or a 0. As the connection probabilities are not uniform across the network (inhomogeneous) and the adjacency matrix is sampled from some probability matrix (random), we say that $\mathbf{A}$ is an inhomogeneous random graph.
Going a step further, we can model each node as having a latent position, which can be used to generate its connection probabilities and, hence, define its behaviour. Using this, we can define node communities: a group of nodes that have the same underlying connection probabilities, meaning they have the same latent positions. We call this kind of model a latent position model. For example, in a network of social interactions at a school, we expect that pupils are more likely to interact with other pupils in their class. In this case, pupils in the same class are said to have similar latent positions and are part of a node community. Mathematically, we say there is a latent position $\mathbf{Z}_i$ assigned to each node $i$, and then our probability matrix will be the Gram matrix of some kernel $f$, i.e. $\mathbf{P}_{ij} = f(\mathbf{Z}_i, \mathbf{Z}_j)$. From this, we generate our adjacency matrix as

$$\mathbf{A}_{ij} \sim \text{Bernoulli}\left(f(\mathbf{Z}_i, \mathbf{Z}_j)\right).$$
Under this model, our goal is then to estimate the latent positions $\mathbf{Z}_i$ by analysing $\mathbf{A}$.
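To make the generative side of this model concrete, here is a sketch that simulates a small latent position network with two communities. The one-dimensional latent positions, the kernel and all parameter values are illustrative assumptions of mine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-community example: latent positions cluster at -1 and +1.
n = 100
communities = rng.integers(0, 2, size=n)
Z = np.where(communities == 0, -1.0, 1.0)  # latent position per node

def kernel(z_i, z_j):
    """Connection probability from latent positions (an RBF-style choice)."""
    return 0.9 * np.exp(-0.5 * (z_i - z_j) ** 2)

# The probability matrix P (the noise-free 'image')...
P = kernel(Z[:, None], Z[None, :])
# ...and the observed adjacency matrix: its Bernoulli-noised version,
# symmetrised with no self-loops.
A = (rng.random((n, n)) < P).astype(int)
A = np.triu(A, 1)
A = A + A.T
```

Nodes sharing a latent position connect with probability 0.9, while cross-community pairs connect far more rarely, so the community structure is plainly visible in A despite the Bernoulli noise.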
A post by Annie Gray, PhD student on the Compass programme.
Initially, my Compass mini-project aimed to explore what we can discover about objects given a matrix of similarities between them. More specifically, how to appropriately measure the distance between objects if we represent each as a point in space (the embedding), and what this can tell us about the objects themselves. This led to discovering that geodesic distances in the embedding relate to the Euclidean distances between the original positions of the objects, meaning we can recover those original positions. This work has applications in fields that work with relational data, for example genetics, natural language processing and cyber-security.
This work resulted in a paper written with my supervisors (Nick Whiteley and Patrick Rubin-Delanchy), which has been accepted at NeurIPS this year. The following gives an overview of the paper and how the ideas can be used in practice.
Probability theory is a branch of mathematics centred around the abstract manipulation and quantification of uncertainty and variability. It forms a basic unit of the theory and practice of statistics, enabling us to distil the complex nature of observable phenomena into meaningful information. Because of this reliance, the debate over the true (or more correct) underlying nature of probability theory has profound effects on how statisticians do their work. The current opposing sides of the debate are the frequentists and the Bayesians. Frequentists believe that probability is intrinsically linked to the numeric regularity with which events occur, i.e. their frequency. Bayesians, however, believe that probability is an expression of someone’s degree of belief or confidence in a certain claim. In everyday parlance we use both of these concepts interchangeably: I estimate one in five people have Covid; I was 50% confident that the football was coming home. It should be noted that the latter of the two is not a repeatable event per se. We cannot roll back time to check what a repeatable sequence would result in.
Imagine that you are employed by Chicago’s city council, and are tasked with estimating where the mean locations of reported crimes are in the city. The data that you are given only goes up to the city’s borders, even though crime does not suddenly stop beyond this artificial boundary. As a data scientist, how would you estimate these centres within the city? Your measurements are obscured past a very complex border, so regular methods such as maximum likelihood would not be appropriate.
This is an example of a more general problem in statistics named truncated probability density estimation. How do we estimate the parameters of a statistical model when data are not fully observed, and are cut off by some artificial boundary?
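A minimal one-dimensional analogue (with entirely made-up numbers) shows why the naive approach fails: if we estimate the mean of a normal distribution from data truncated at a boundary, the ordinary maximum likelihood estimate is pulled away from the truth, because the cut-off tail is silently discarded:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-d analogue: events centred at mu = 2, but we only observe
# reports inside the 'city boundary' x < 3.
mu_true, boundary = 2.0, 3.0
x = rng.normal(mu_true, 1.0, size=100_000)
observed = x[x < boundary]        # truncation: data beyond the border are lost

# Ordinary MLE for a normal mean is the sample mean -- but applied to the
# truncated sample it is biased low, since the right tail never arrives.
naive_estimate = observed.mean()
```

Correcting for this requires accounting for the boundary in the estimator itself, which is exactly what truncated density estimation methods set out to do; with a boundary as complex as a city border, that correction is far from trivial.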