New opportunity: AstraZeneca to fund Compass PhD project

Novel semi-supervised Bayesian learning to rapidly screen new oligonucleotide drugs for impurities.

This is an exciting opportunity to join Compass’ 4-year programme with integrated training in the statistical and computational techniques of Data Science. You will be part of a dynamic cohort of PhD researchers hosted in the historic Fry Building, which has recently undergone a £35 million refurbishment as the new home for Bristol’s School of Mathematics.

This fully-funded 4-year studentship covers:

  • tuition fees at UK rate
  • tax-free stipend of up to £19,609 per year for living expenses and
  • equipment and travel allowance to support research related activities.

This opportunity is open to UK, EU, and international students. 

AstraZeneca is a global, science-led biopharmaceutical business whose innovative medicines are used by millions of patients worldwide. Oligonucleotide-based therapies are advanced novel interventions with the potential to provide a step-change in treatment for many. Nevertheless, as oligonucleotides are large complex molecules they are currently very difficult to profile for impurities, as the analysis is labour intensive and the data complexity is high.

About the Project

The aim of this PhD is to develop Bayesian data science methodology that performs this impurity profiling automatically and accurately, and delivers statistical measures of certainty. The challenge is a mathematical one, and no chemistry, biology or pharmacology background is expected of the student. More specifically, we have large batches of mass spectrometry data that will enable us to learn how to characterise the known oligonucleotide signal and deconvolute it from a number of known and unknown impurities longitudinally, in a semi-supervised learning framework. This will allow us to confirm the overall consistency of the profile and identify any change patterns, trends over batches, and correlations between impurities.

The end goal is to establish a data analytics pipeline and embed it as part of routine analysis in AstraZeneca, so impurities can be monitored more closely and more precisely. The knowledge can then be used to identify possible issues in manufacturing and improve process chemistry by pinpointing impurities associated with different steps of the drug synthesis. This project would also improve the overall understanding of oligonucleotides and therefore, serve as a key step towards establishing an advanced analytical platform.

Project Supervisor

The PhD will be supervised by statistical data scientist Prof Andrew Dowsey at Bristol in collaboration with AstraZeneca. Prof Dowsey’s group has extensive expertise and experience in Bayesian mass spectrometry analytics (e.g. Nature Comms Biology 2019; Nature Scientific Reports 2016) and leads the development of the seaMass suite of tools for quantification and statistical analyses in mass spectrometry.

Application Deadline

Application Deadline is 5.00pm Friday 18 June 2021. Please quote ‘Compass/ AstraZeneca’ in the funding section of the application form and in your Personal Statement to ensure your application is reviewed correctly. Please follow the Compass application guidance.

Interviews are expected to be held in the week commencing 12 July.

Student Perspectives: LV Datathon – Insurance Claim Prediction

A post by Doug Corbin, PhD student on the Compass programme.

Introduction

In a recent bout of friendly competition, students from the Compass and Interactive AI CDTs were divided into eight teams to take part in a two-week Datathon hosted by insurance provider LV. A Datathon is a short, intensive competition posing a data-driven challenge to the teams. The challenge was to construct the best predictive model for (the size of) insurance claims, using an anonymised, artificial data set generously provided by LV. Each team’s solution was judged on three criteria:

  • Accuracy – How well the solution performs at predicting insurance claims.
  • Explainability – The ability to understand and explain how the solution calculates its predictions; it is important to be able to explain to a customer how their quote has been calculated.
  • Creativity – The solution’s incorporation of new and unique ideas.

Students were given the opportunity to put their experience in Data Science and Artificial Intelligence to the test on something resembling real-life data, forming cross-CDT relationships in the process.

Data and Modelling

Before training a machine learning model, the data must first be processed into a numerical format. To achieve this, most teams transformed categorical features into a series of 0s and 1s (representing the value of the category), using a well-known process called one-hot encoding. Others recognised that certain features had a natural order to them, and opted to map them to integers corresponding to their ordered position.
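As an illustration, the two encodings might look like this in pandas (the column names and category levels here are hypothetical, not taken from the LV data set):

```python
import pandas as pd

# Hypothetical policy data; columns and categories are illustrative only.
df = pd.DataFrame({
    "vehicle_class": ["luxury", "standard", "sports", "standard"],
    "coverage": ["basic", "extended", "premium", "basic"],
})

# One-hot encode an unordered category: one 0/1 column per level.
one_hot = pd.get_dummies(df["vehicle_class"], prefix="vehicle_class")

# Map an ordered category to integers reflecting its natural order.
coverage_order = {"basic": 0, "extended": 1, "premium": 2}
df["coverage_level"] = df["coverage"].map(coverage_order)

print(one_hot.columns.tolist())
print(df["coverage_level"].tolist())  # [0, 1, 2, 0]
```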

A common consideration amongst all the teams’ analyses was feature importance. Of the many methods and algorithms used to evaluate the relative importance of the features, notable mentions are Decision Trees, LASSO optimisation, Permutation Importance and SHAP values. The specific details of these methods are beyond the scope of this blog post, but they all share a common goal: to rank features according to how important they are in predicting claim amounts. In many of the solutions, feature importance was used to simplify the models by excluding features with little to no predictive power. For others, it was used as a post-analysis step to increase explainability, i.e. to show which features were most important for a particular claim prediction. As part of the feature selection process, all teams considered the ethical implications of the data, with many choosing to exclude certain features to mitigate social bias.
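As a sketch of one of these approaches, permutation importance shuffles one feature at a time and measures how much the model’s score degrades; scikit-learn provides this directly. The data below are synthetic, with only the first feature carrying signal:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic stand-in for the claims data: only feature 0 is predictive.
X = rng.normal(size=(500, 3))
y = 100 + 50 * X[:, 0] + rng.normal(scale=5, size=500)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Shuffle each feature in turn and record the mean drop in model score.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
print(ranking)  # feature 0 should rank first
```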

Interestingly, almost all teams incorporated some form of Gradient Boosted Decision Trees (GBDT) into their solution, either for feature selection or regression. This involves constructing multiple decision trees, which are aggregated to give the final prediction. A decision tree can be thought of as a sequence of binary questions about the features (e.g. Is the insured vehicle a sports car? Is the car a write off?), which lead to a (constant) prediction depending on the answers. In the case of GBDT, decision trees are constructed sequentially, each new tree attempting to capture structure in the data which has been overlooked by its predecessors. The final estimate is a weighted sum of the trees, where the weights are optimised using (the gradient of) a specified loss function e.g. Mean-Squared Error (MSE),

MSE = \frac{1}{N} \sum_{n = 1}^N (y_n - \hat{y}_n)^2.

Many of the teams trialled multiple regression models before ultimately settling on a tree-based model. However, it is well known that tree-based models are prone to overfitting the training data. Indeed, many of the teams were surprised to see such a significant difference between the training and testing Mean Absolute Error (MAE),

MAE = \frac{1}{N} \sum_{n = 1}^N |y_n - \hat{y}_n|.

Results

After two weeks of hard work, the students came forward to present their solutions to a judging panel formed of LV representatives and CDT directors. The success of each solution was measured via the MAE of its predictions on the testing data set. With the students anxious to find out how they had fared, the following winners were announced.

Accuracy Winners

Pre-processing: Categorical features one-hot encoded or mapped to integers where appropriate.

Regression Model: Gradient Boosted Decision Trees.

Testing MAE: 69.77

The winning team (in accuracy) was able to dramatically reduce their testing MAE through their choice of loss function. Loss functions quantify how well or badly a regression model is performing during training, and they are used to optimise the weighted sum of decision trees. While most teams used the popular Mean-Squared Error loss, the winning team instead used Least Absolute Deviations, which is equivalent to optimising the MAE while training the model.

Explainability (Joint) Winners

After much deliberation amongst the judging panel, two teams were awarded joint first place in the explainability category!

Team 1: 

Pre-processing: Categorical features one-hot encoded or mapped to integers where appropriate. Features centred and scaled to have mean 0 and standard deviation 1, then selected using Gradient Boosted Decision Trees.

Regression Model: K-Nearest-Neighbours Regression

Testing MAE: 75.25

This team used Gradient Boosted Decision Trees for feature selection, combined with K-Nearest-Neighbours (KNN) Regression to model the claim amounts. KNN regression is simple in nature; given a new claim to be predicted, the K “most similar” claims in the training set are averaged (weighted according to similarity) to produce the final prediction. It is thus explainable in the sense that for every prediction you can see exactly which neighbours contributed, and what similarities they shared. The judging panel noted that, from a consumer’s perspective, they may not be satisfied with their insurance quote being based on just K neighbours.
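A minimal sketch of this kind of model, using scikit-learn’s KNeighborsRegressor on synthetic data: the kneighbors method exposes exactly which training points drove a prediction, which is the source of the model’s explainability.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)
# Synthetic stand-in for the (centred and scaled) claims features.
X = rng.normal(size=(200, 2))
y = 50 + 20 * X[:, 0] + rng.normal(scale=2, size=200)

# Distance-weighted KNN: each prediction averages the K most similar claims.
knn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X, y)

x_new = np.array([[0.0, 0.0]])
# kneighbors returns the distances and indices of the K training points
# that contribute to the prediction for x_new.
distances, indices = knn.kneighbors(x_new)
prediction = knn.predict(x_new)
print(indices[0], prediction[0])
```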

Team 2:

Pre-processing: All categorical features one-hot encoded.

Regression Model: Gradient Boosted Decision Trees. SHAP values used for post-analysis explainability.

Testing MAE: 80.3

The judging panel was impressed by this team’s decision to impose monotonicity in the claim predictions with respect to the numerical features. This asserts that, for monotonic features, the claim prediction must move in a constant direction (increasing or decreasing) if the numerical feature is moving in a constant direction. For example, a customer’s policy excess is the amount they will have to pay towards a claim made on their insurance. It stands to reason that increasing the policy excess (while other features remain constant) should not increase their insurance quote. If this constraint is satisfied, we say that the insurance quote is monotonically decreasing with respect to the policy excess. Further, SHAP values were used to explain the importance and effect of each feature on the model.

Creativity Winners

Pre-processing: Categorical features one-hot encoded or mapped to integers where appropriate. New feature engineered from Vehicle Size and Vehicle Class. Features selected using Permutation Importance.

Regression Model: Gradient Boosted Decision Trees. Presented post-analysis mediation/moderation study of the features.

Testing MAE: 76.313

The winning team for creativity presented unique and intriguing methods for understanding and manipulating the data. This team noticed that the features Vehicle Size and Vehicle Class are intrinsically related: for example, they investigated whether a large vehicle would be likely to yield a higher claim if it is also of a luxury vehicle class. To represent this relationship, they engineered a new feature by taking a multiplicative combination of the two initial features.
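A multiplicative interaction of this kind might be engineered as follows (the integer codes for each category are purely illustrative):

```python
import pandas as pd

# Illustrative ordered integer codes for the two related features.
df = pd.DataFrame({
    "vehicle_size": [1, 2, 3, 2],   # e.g. small=1, medium=2, large=3
    "vehicle_class": [0, 1, 2, 2],  # e.g. standard=0, premium=1, luxury=2
})

# Multiplicative interaction: large only when the vehicle is both big and high-class.
df["size_x_class"] = df["vehicle_size"] * df["vehicle_class"]
print(df["size_x_class"].tolist())  # [0, 2, 6, 4]
```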

As an extension of their solution, they presented an investigation of the causal relationship between the different features. Several hypothesis tests were performed, testing whether the relationship between certain features and claim amounts is moderated or mediated by an alternative feature in the data set.

  • Mediating relationships: If a feature is mediated by an alternative feature in the data set, its relationship with the claim amounts can be well explained by the alternative (potentially indicating it can be removed from the model).
  • Moderating relationships: If a feature is moderated by an alternative feature in the data set, the strength and/or direction of the relationship with the claim amounts is impacted by the alternative.

Final Thoughts

All the teams showed a great understanding of the problem and identified promising solutions. The competitive atmosphere of the LV Datathon created a notable buzz amongst the students, who were eager to present and compare their findings. As evidenced by every team’s solution, the methodological conclusion is clear: When it comes to insurance claim prediction, tree-based models are unbeaten!

Student perspectives: The Elo Rating System – From Chess to Education

A post by Andrea Becsek, PhD student on the Compass programme.

If you have recently binge-watched The Queen’s Gambit, chances are you have heard of the Elo Rating System. There are actually many games out there that require some way to rank players or even teams. However, the applications of the Elo Rating System reach further than you might think.

History and Applications

The Elo Rating System [1] was first suggested as a way to rank chess players; however, it can be used in any competitive two-player game that requires a ranking of its players. The system was adopted by the World Chess Federation in 1970, and there have been various adjustments to it since, resulting in different implementations by each organisation.

For any soccer-lovers out there, the FIFA world rankings are also based on the Elo System, but if you happen to be into a similar sport, worry not, Elo has you covered. And the list of applications goes on and on: Backgammon, Scrabble, Go, Pokemon, and apparently even Tinder used it at some point.

Fun fact: the formulas used by the Elo Rating System make a famous appearance in The Social Network, a movie about the creation of Facebook. Whether this was the actual algorithm used for FaceMash, Facebook’s predecessor, is however unclear.

All this sounds pretty cool, but how does it actually work?

How it works

We want a way to rank players and update their ranking after each game. Let’s start by assuming that we have ratings for the two players about to play: \theta_i for player i and \theta_j for player j. Then we can compute the probability of player i winning against player j using the logistic function:

P(Y_{ij}=1)=\frac{1}{1+\exp\{-(\theta_i-\theta_j)\}}.

Given what we know about the logistic function, it is easy to notice that the smaller the difference between the players’ ratings, the less certain the outcome, as the probability of winning will be close to 0.5.

Once the outcome of the game is known, we can update both players’ abilities:

\theta_{i}:=\theta_{i}+K(Y_{ij}-P(Y_{ij}=1))

\theta_{j}:=\theta_{j}+K(P(Y_{ij}=1)-Y_{ij}).

The K factor controls the influence of a player’s performance on their previous ranking. For players with high rankings, a smaller K is used because we expect their abilities to be somewhat stable, so their ranking should not be too heavily influenced by any single game. On the other hand, players with low ability can learn and improve quite quickly, so their rating should be able to fluctuate more; for them a larger K is used.

The term in the brackets represents how different the actual outcome is from the expected outcome of the game. If a player is expected to win but doesn’t, their ranking will decrease, and vice versa. The larger the difference, the more their rating will change. For example, if a weaker player is highly unlikely to win, but they do, their ranking will be boosted quite a bit because it was a hard battle for them. On the other hand, if a strong player is really likely to win because they are playing against a weak player, their increase in score will be small as it was an easy win for them.
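The update rules above take only a few lines to implement; here is a minimal sketch (the K value of 0.1 is an arbitrary choice for illustration):

```python
import math

def expected_score(theta_i, theta_j):
    """Probability that player i beats player j under the logistic Elo model."""
    return 1.0 / (1.0 + math.exp(-(theta_i - theta_j)))

def elo_update(theta_i, theta_j, y_ij, k=0.1):
    """Update both ratings after a game; y_ij is 1 if player i won, 0 otherwise."""
    p = expected_score(theta_i, theta_j)
    return theta_i + k * (y_ij - p), theta_j + k * (p - y_ij)

# An upset: the weaker player (rating -1) beats the stronger one (rating 1).
# The gain is large because the expected score was low.
new_weak, new_strong = elo_update(-1.0, 1.0, 1.0)
print(new_weak, new_strong)
```

Note that the updates are symmetric: whatever one player gains, the other loses, so the total rating in the system is conserved.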

Elo Rating and Education

As previously mentioned, the Elo Rating System has been used in a wide range of fields and, as it turns out, that includes education, more specifically adaptive educational systems [2]. Adaptive educational systems are concerned with automatically selecting adequate material for a student depending on their previous performance.

Note that a system can be adaptive at different levels of granularity. Some systems might adapt the homework from week to week, generating it based on the student’s current ability and updating that ability once the homework has been completed, whereas other systems update the student’s ability after every single question. As you can imagine, the second kind of system requires a fairly fast, online algorithm. And this is where the Elo Rating comes in.

For an adaptive system to work, we need two key components: student abilities and question difficulties. To apply the Elo Rating to this context, we treat a student’s interaction with a question as a game where the student’s ranking represents their ability and the question’s ranking represents its difficulty. We can then predict whether a student of ability \theta_i will answer a question of difficulty d_j correctly using

P(\text{correct}_{ij}=1)=\frac{1}{1+\exp\{-(\theta_i-d_j)\}},

and the ability and difficulty can be updated using

\theta_{i}:=\theta_{i}+K(\text{correct}_{ij}-P(\text{correct}_{ij}=1))

d_{j}:=d_{j}+K(P(\text{correct}_{ij}=1)-\text{correct}_{ij}).

So even if you only have 10 minutes to create an adaptive educational system you can easily implement this algorithm. Set all abilities and question difficulties to 0, let students answer your questions, and wait for the magic to happen. If you do have some prior knowledge about the difficulty of the items you could of course incorporate that into the initial values.
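A ten-minute version of such a system might look like the following sketch, in which two simulated students with (unknown) true abilities answer questions whose ratings all start at 0; the ability and difficulty values and the K of 0.1 are illustrative assumptions:

```python
import math
import random

random.seed(0)

def p_correct(theta, d):
    """Probability of a correct answer under the logistic Elo model."""
    return 1.0 / (1.0 + math.exp(-(theta - d)))

# Two simulated students with hidden true abilities; all estimated
# abilities and question difficulties are initialised to 0.
true_ability = {"strong": 2.0, "weak": -2.0}
theta = {"strong": 0.0, "weak": 0.0}
difficulty = [0.0] * 20
k = 0.1

for _ in range(200):
    student = random.choice(list(theta))
    q = random.randrange(len(difficulty))
    # Simulate the answer from the student's (unknown) true ability.
    correct = 1.0 if random.random() < p_correct(true_ability[student], difficulty[q]) else 0.0
    # Elo updates: ability and difficulty move in opposite directions.
    p = p_correct(theta[student], difficulty[q])
    theta[student] += k * (correct - p)
    difficulty[q] += k * (p - correct)

print(theta)  # the estimated abilities should separate in the right order
```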

One important thing to note is that one should be careful with ranking students based on their abilities as this could result in various ethical issues. The main purpose of obtaining their abilities is to track their progress and match them with questions that are at the right level for them, easy enough to stay motivated, but hard enough to feel challenged.

Conclusion

So is Elo the best option for an adaptive system? It depends. It is fast, enables on the fly updates, it’s easy to implement, and in some contexts, it even has a similar performance to more complex models. However, there are usually many other factors that can be relevant to predicting student performance, such as the time spent on a question or the number of hints they use. This additional data can be incorporated into more complex models, probably resulting in better predictions and offering much more insight. At the end of the day, there is always a trade-off, so depending on the context it’s up to you to decide whether the Elo Rating System is the way to go.

Find out more about Andrea Becsek and her work on her profile page.

  1. Elo, A.E., 1978. The rating of chessplayers, past and present. Arco Pub.
  2. Pelánek, R., 2016. Applications of the Elo rating system in adaptive educational systems. Computers & Education, 98, pp.169-179.

IBM Research is newest Compass partner to sponsor PhD project

The University of Bristol is excited to announce IBM Research Europe as a new partner of Compass – the EPSRC Centre for Doctoral Training in Computational Statistics and Data Science. IBM scientists are collaborating with Prof. Robert Allison and Compass PhD student Anthony Stephenson, on a research project entitled Fast Bayesian Inference at Extreme Scale. The project’s aim is to extend Bayesian inference algorithms to the ‘extreme scales’ that many deep learning workloads occupy, by placing more focus on AI methodologies which furnish both an accurate prediction, and critically, a high-quality uncertainty representation for predictions.

For more than seven decades, IBM Research has defined the future of information technology with more than 3,000 researchers in 19 locations across six continents. Scientists from IBM Research have produced six Nobel Laureates, 10 U.S. National Medals of Technology, five U.S. National Medals of Science, six Turing Awards, 19 inductees in the National Academy of Sciences and 20 inductees into the U.S. National Inventors Hall of Fame.

IBM has European research locations in Switzerland (Zurich), England (Hursley and Daresbury), and Ireland (Dublin), with a large development lab in Germany focused on AI, quantum computing, security and hybrid cloud.

IBM’s global labs are involved in hundreds of joint projects with universities, particularly throughout Europe, in research programmes established by the European Union and local governments, and in cooperation agreements with the research institutes of industrial partners.

Compass is a 4-year PhD training programme focusing on Computational Statistics and Data Science. This new venture is part of the Compass mission to promote academic and professional agility in its students, equipping them with the skills and experience to work across disciplines in academia and beyond.

Anthony Stephenson, the PhD student recruited to this project, says: “After several years working in industry, I am pleased to be starting the Compass programme and shifting my focus to research. Having the combined forces of the University of Bristol and IBM behind me inspires confidence and I look forward to working with members of each of them. My project, scalable inference in non-linear Bayesian models, is also a highly relevant and exciting area to work on, with many applications in modern machine learning.”

Dr Ed Pyzer-Knapp, World-Wide IBM Research Lead in AI Enriched Modelling and Simulation, says: “I am very excited to work with Anthony and Robert – scaling Bayesian inference is a really important area of machine learning research; bringing to bear our mantra of fusing bits and neurons to further develop the future of computing. This project is a great opportunity to further strengthen our relationship with the University of Bristol.”

Prof Robert Allison, Anthony’s academic supervisor at the University of Bristol, says: “I’m really looking forward to working with Anthony and Ed on a highly important and widely applicable area of machine learning which encompasses mathematical research, data-analysis, algorithm development and efficient large-scale computation. In addition, I see this project as an ideal opportunity to seed wider ranging data-science and machine learning collaborations between IBM Research, their academic partners and the University of Bristol.”

As Director of Compass, Prof Nick Whiteley says: “I’m absolutely delighted to welcome IBM Research to Compass. This project is a fantastic opportunity for Anthony to tackle a very challenging and increasingly important AI research problem under Prof. Allison and Dr. Pyzer-Knapp’s supervision. As this collaboration develops, I look forward to all Compass students learning about IBM’s vision for the future of AI and its connection to the expertise in statistical methodology and computing they will acquire through the Compass training programme.”
