Access to Data Science Event: start your PhD journey with Compass

Want to find out what a modern PhD in Statistics and Data Science is like?

 

Application deadline: Friday 15 July 2022, 23:59.  

APPLY NOW

The EPSRC Centre for Doctoral Training in Computational Statistics and Data Science (Compass) offers a PhD training programme in the statistical and computational techniques of data science. As part of our recruitment for the 2023/24 academic year, we are holding an open event to introduce potential applicants to the CDT.

Access to Data Science will run on Monday 12 September 2022 and will provide an immersive experience for prospective PhD students. This fully funded event will be hosted by Compass academic staff and PhD students in the Fry Building, home to the School of Mathematics. The timing also enables participants to attend the Compass Annual Conference, held in the School of Mathematics on Tuesday 13 September 2022.

As part of the Access to Data Science event, you’ll be invited to:

  • learn about the Compass CDT programme
  • discover the details of PhD projects on offer for 2023/24
  • take part in a hands-on data workshop
  • have exclusive access to an application writing workshop
  • interact with current Compass students and find out more about their experience of the programme
  • attend the Compass Annual Conference to hear first-hand from our students about their research projects

The purpose of this event is to increase all aspects of diversity amongst Data Science researchers. We particularly encourage applications from women and from members of the LGBTQ+ and Black, Asian and minority ethnic communities.

We welcome participants from a range of numerate academic backgrounds, with undergraduate degrees in subjects such as computer science, economics, epidemiology, mathematics, statistics and physics. 


Application process 

We welcome applications from across the UK. 

Access to Data Science participants will be offered hotel accommodation, reimbursement of travel costs within the UK, and meals for each day of the event. 

Your Access to Data Science application should include a CV and a short personal statement (no more than 500 words) explaining:

  • your motivation for applying to Access to Data Science and how it relates to your future plans
  • why you should be offered a place on the programme, highlighting elements of your academic record which align with the theme of the programme, such as project work or courses you have taken
  • any other relevant skills or experience

We expect applicants to be on track to meet the entry requirements for the Compass PhD programme. If you have any questions about this event and whether it’s right for you, please contact compass-cdt@bristol.ac.uk.


Student perspectives: Neural Point Processes for Statistical Seismology

A post by Sam Stockman, PhD student on the Compass programme.

Introduction

Throughout my PhD I aim to bridge a gap between advances made in the machine learning community and the age-old problem of earthquake forecasting. In this inter-departmental work with Max Werner from the School of Earth Sciences and Dan Lawson from the School of Mathematics, I hope to create more powerful, efficient and robust forecasting models that can make earthquake-prone areas safer for their inhabitants.

For years, seismologists have sought to model the structure and dynamics of the Earth in order to make predictions about earthquakes. They have mapped the structure of fault lines and conducted laboratory experiments in which they subject rock to great forces in order to simulate plate tectonics on a small scale. Yet when forecasting earthquakes on short time scales (hours and days, not tens of years), these physics-based models are regularly outperformed by statistically motivated ones. In statistical seismology, we seek to make predictions by looking at the distributions of the times, locations and magnitudes of past earthquakes, and use them to forecast the future.
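To make the statistical approach concrete: short-term seismicity models such as ETAS are self-exciting point processes, in which each earthquake temporarily raises the rate of future events. The sketch below simulates a much-simplified temporal Hawkes process by Ogata's thinning algorithm; the exponential decay kernel and all parameter values are illustrative assumptions, not the ETAS model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def intensity(t, events, mu=0.5, alpha=0.8, beta=1.2):
    """Hawkes intensity: background rate mu plus exponentially decaying
    excitation from every past event (a simplified cousin of ETAS)."""
    past = events[events <= t]
    return mu + alpha * np.sum(np.exp(-beta * (t - past)))

def simulate_hawkes(t_max, mu=0.5, alpha=0.8, beta=1.2):
    """Simulate event times on [0, t_max] by Ogata's thinning algorithm."""
    events = np.array([])
    t = 0.0
    while True:
        # Between events the intensity only decays, so the current
        # intensity is a valid upper bound (thinning envelope).
        lam_bar = intensity(t, events, mu, alpha, beta)
        t += rng.exponential(1.0 / lam_bar)     # candidate waiting time
        if t >= t_max:
            break
        if rng.uniform() * lam_bar <= intensity(t, events, mu, alpha, beta):
            events = np.append(events, t)       # accept the candidate
    return events

times = simulate_hawkes(100.0)
```

Because alpha/beta < 1 here, each event spawns fewer than one offspring on average and the process is stable rather than explosive.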



Ed Davis wins poster competition

Congratulations to Ed Davis who won a poster award as part of the Jean Golding Institute’s Beauty of Data competition.

This visualisation, entitled “The World Stage”, offers a new way of representing the positions of countries. Instead of placing them by geographical position, it places them according to their geopolitical alliances: countries are positioned so as to minimise the distance to their allies and maximise the distance to their non-allies, based on 40 different alliances involving 161 countries. The representation was produced by embedding the alliance network with Node2Vec, then reducing it to 2D with principal component analysis (PCA).
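As a self-contained sketch of the embed-then-reduce pipeline described above: the actual visualisation used Node2Vec (a random-walk embedding, requiring a dedicated library) on 40 alliances over 161 countries. Here a tiny hypothetical alliance graph is reduced to 2D by applying PCA directly to the adjacency matrix, as a stand-in for the Node2Vec step.

```python
import numpy as np

# Hypothetical toy alliance network: two blocs among five countries.
countries = ["A", "B", "C", "D", "E"]
edges = [(0, 1), (0, 2), (1, 2), (3, 4)]

n = len(countries)
adj = np.zeros((n, n))
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0   # undirected alliance ties

# PCA via SVD: centre the rows, then project onto the top-2
# principal directions to get a 2D position per country.
X = adj - adj.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
coords = X @ vt[:2].T             # shape (n, 2)
```

Allied countries share similar adjacency rows, so they land near each other in the projection; a walk-based embedding such as Node2Vec captures the same neighbourhood structure more flexibly on large graphs.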

Student perspectives: Sampling from Gaussian Processes

A post by Anthony Stephenson, PhD student on the Compass programme.

Introduction

The general focus of my PhD research is in some sense to produce models with the following characteristics:

  1. Well-calibrated (uncertainty estimates from the predictive process reflect the true variance of the target values)
  2. Non-linear
  3. Scalable (i.e. we can run it on large datasets)

At a high level, we can have any two of these three requirements without too much difficulty, but adding the third causes trouble. For example, Bayesian linear models satisfy good calibration and scalability but (as the name suggests) fail at modelling non-linear functions. Similarly, neural networks are famously good at modelling non-linear functions, and much work has gone into improving their efficiency and scalability, but producing well-calibrated predictions from them is a complex additional feature. I am approaching the problem from the angle of Gaussian processes, which provide well-calibrated non-linear models, at the expense of scalability.

Gaussian Processes (GPs)

See Conor’s blog post for a more detailed introduction to GPs; here I will provide a basic summary of the key facts we need for the rest of the post.

The functional view of GPs is that we define a distribution over functions:

f(\cdot) \sim \mathcal{GP}(m(\cdot), k(\cdot, \cdot))

where m and k are the mean function and kernel function respectively, which play analogous roles to the usual mean and covariance of a Gaussian distribution.

In practice, we only ever observe some finite collection of points, corrupted by noise, which we can hence view as a draw from some multivariate normal distribution:

y_n \sim \mathcal{N}(0, \underbrace{K_n + \sigma^2I_n}_{K_\epsilon})

where

y_n = f(X) + \epsilon_n with \epsilon_n \sim \mathcal{N}(0,\sigma^2 I_n).

(Here subscript n denotes dimensionality of the vector or matrix).
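Sampling y_n from this marginal is direct: build K_n from a kernel, add \sigma^2 I_n, take a Cholesky factor, and multiply by standard normal draws. A minimal sketch, assuming an RBF kernel and illustrative hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(x1, x2, lengthscale=0.5, variance=1.0):
    """Squared-exponential (RBF) kernel k(x, x') on 1D inputs."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

n, sigma2 = 50, 0.1
x = np.linspace(0.0, 1.0, n)
K_eps = rbf_kernel(x, x) + sigma2 * np.eye(n)   # K_n + sigma^2 I_n

# y_n ~ N(0, K_eps): Cholesky factor times standard normal draws.
L = np.linalg.cholesky(K_eps)
y = L @ rng.standard_normal(n)
```

Since L L^T = K_eps, the draw y has covariance L I L^T = K_eps, as required.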

When we use GPs to generate predictions at some new test point x_\star \notin X, we use the following equations (see [1] for a derivation) for the predictive mean and variance respectively:

\mu(x_\star) = k(x_\star,X)K_\epsilon^{-1}y_n

V(x_\star) = k(x_\star, x_\star) - k(x_\star, X)K_\epsilon^{-1}k(X,x_\star)

The key point here is that both predictive functions involve the inversion of an n\times n matrix at a cost of \mathcal{O}(n^3).
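In practice the inverse is not formed explicitly: a Cholesky factorisation of K_\epsilon (still \mathcal{O}(n^3)) followed by triangular solves gives the same predictions more stably. A sketch on hypothetical 1D data, with the RBF kernel and all hyperparameters as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel with unit signal variance."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

# Hypothetical 1D training data with observation noise sigma^2.
n, sigma2 = 40, 0.05
X = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * X) + np.sqrt(sigma2) * rng.standard_normal(n)

# The O(n^3) step: factor K_eps = K_n + sigma^2 I_n once.
K_eps = rbf(X, X) + sigma2 * np.eye(n)
L = np.linalg.cholesky(K_eps)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K_eps^{-1} y

# Predictive mean and variance at test points x_star.
x_star = np.linspace(0.0, 1.0, 5)
k_star = rbf(x_star, X)                               # k(x_star, X)
mu = k_star @ alpha                                   # mean mu(x_star)
v = np.linalg.solve(L, k_star.T)                      # L^{-1} k(X, x_star)
var = rbf(x_star, x_star).diagonal() - np.sum(v**2, axis=0)
```

Each new test point then costs only \mathcal{O}(n) for the mean and \mathcal{O}(n^2) for the variance once the factorisation is cached.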

Motivation

