## Launch of industry-focused seminar series DataScience@Work

Compass is excited to announce the launch of the DataScience@Work seminar series. This new series invites speakers from external organisations to talk about their experiences as Data Scientists in industry, government and the third sector. The dual meaning of DataScience@Work focuses talks both on the technical side of the speakers’ roles and on working as part of a wider organisation and building a career in data science.

Highlighting the importance of the new seminar series, Prof Nick Whiteley (Compass Director) says…

Compass aims to develop scientific and professional agility in its students. Our goal is to connect technical expertise in data science with experience of thinking, communicating and collaborating across disciplines and across sectors. In our new DataScience@Work seminar series, Compass partners from industry will share insights into the key role of Data Science within their organisations, their objectives and future outlook. This is a great opportunity for our students to learn about career trajectories beyond academia, helping shape their aspirations and personal goals for life beyond the PhD. I’m especially grateful to Adarga, CheckRisk, IBM Research, Improbable, and Shell for leading this first season of DataScience@Work and for their ongoing support for Compass.

For further information on the seminar series, including the invited speakers for the 2020/21 session, see the DataScience@Work page.

## Compass Special Lecture: Jonty Rougier

Compass is excited to announce that Jonty Rougier (2021 recipient of the Barnett Award) will be delivering a Compass Special Lecture.

Jonty’s experience lies in Computer Experiments, computational statistics and Machine Learning, uncertainty and risk assessment, and decision support. In 2021, he was awarded the Barnett Award by the RSS, which is made to those internationally recognised for contributions in the field of environmental statistics, risk and uncertainty quantification. Rougier has also advised several UK Government departments and agencies, including a secondment to the Cabinet Office in 2016/17 to contribute to the UK National Risk Assessment.

13th April 2021
09:30 - 14:00

## Student perspectives: Wessex Water Industry Focus Lab

A post by Michael Whitehouse, PhD student on the Compass programme.

### Introduction

September saw the first of an exciting new series of Compass industry focus labs; with this came the chance to make use of the extensive skill sets acquired throughout the course and an opportunity to provide solutions to pressing issues of modern industry. The partner for the first focus lab, Wessex Water, posed the following question: given time series data on water flow levels in pipes, can we detect if new leaks have occurred? Given the inherent value of clean water available at the point of use and the detriments of leaking this vital resource, the challenge of ensuring an efficient system of delivery is of great importance. Hence, finding an answer to this question has the potential to provide huge economic, political, and environmental benefits for a large base of service users.

### Data and Modelling

The dataset provided by Wessex Water consisted of water flow data for around 760 pipes. After this data was cleaned and processed, some useful series were extracted, such as the minimum nightly flow (MNF) and average daily flow (ADF). Preliminary analysis carried out by our collaborators at Wessex Water concluded that certain types of change in the structure of water flow data provide good indications that a leak has occurred. From this, one can postulate that detecting a leak amounts to detecting these structural changes in the data. This principle gave us a framework for building solutions: detect the change; detect a new leak.

Change point detection is a well-researched discipline that provides efficient methods for detecting statistically significant changes in the distribution of a time series, and hence a toolbox with which to tackle the problem. Indeed, we at Compass have our very own active member of the change point detection research community in the shape of Dom Owens. The preliminary analysis indicated three types of structural change in water flow series that signal a leak: a change in the mean of the MNF, a change in the trend of the MNF, and a change in the variance of the difference between the MNF and ADF. To detect these changes with an algorithm, we needed to transform the given series so that the original change in distribution corresponded to a change in the mean of the transformed series. These transforms included calculating generalised additive model (GAM) residuals and analysing their distribution. An example of such a GAM is given by:

$\mathbb{E}[\text{flow}_t] = \beta_0 + \sum_{i=1}^m f_i(x_i),$

where the $x_i$’s are features we want to use to predict the flow, such as the time of day or current season. The principle behind this analysis is that any change in the residual distribution corresponds to a violation of the assumption that the residuals are independently and identically distributed, and hence, in turn, corresponds to a deviation from the original structure we fit our GAM to.
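To make the residual principle concrete, here is a minimal, self-contained sketch using a simple Fourier-basis regression as a stand-in for a full GAM; the data, basis, and leak size are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hourly flow over 60 days: a smooth daily cycle plus noise.
t = np.arange(60 * 24)
hour = t % 24
flow = 10 + 3 * np.sin(2 * np.pi * hour / 24) + rng.normal(0, 0.5, t.size)
flow[t >= 30 * 24] += 2.0  # simulated leak: mean shift after day 30

# Stand-in for a GAM: regress flow on a small Fourier basis of hour-of-day,
# fitted only on the pre-leak stretch.
X = np.column_stack([
    np.ones_like(t, dtype=float),
    np.sin(2 * np.pi * hour / 24),
    np.cos(2 * np.pi * hour / 24),
])
beta, *_ = np.linalg.lstsq(X[t < 30 * 24], flow[t < 30 * 24], rcond=None)
resid = flow - X @ beta

# Under the fitted structure the residuals should be i.i.d. around zero;
# a shift in their mean flags a deviation such as a leak.
print(resid[t < 30 * 24].mean().round(2),
      resid[t >= 30 * 24].mean().round(2))  # near 0 before the leak, near 2 after
```

The residual series is exactly the kind of transformed series described above: the leak shows up as a mean change even though the raw flow is dominated by its daily cycle.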

Figure 1: GAM residual plot. Red lines correspond to detected changes in distribution, green lines indicate a repair took place.

### A Change Point Detection Algorithm

In order to detect changes in real time we would need an online change point detection algorithm. After evaluating the existing literature, we elected to follow the mean change detection procedure described in [Wang and Samworth, 2016]. The user-end procedure is as follows:

1. Calculate mean estimate $\hat{\mu}$ on some data we assume is stationary.
2. Feed a new observation into the algorithm. Calculate test statistics based on new data.
3. Repeat (2) until any test statistic exceeds a threshold, at which point we conclude a mean change has been detected. Return to (1).
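As a rough illustration of steps (1)–(3), here is a simplified CUSUM-style online detector. This is a sketch of the general pattern only, not the specific procedure of [Wang and Samworth, 2016]; the drift parameter and threshold are illustrative choices:

```python
import numpy as np

def online_mean_change(stream, train_n=50, drift=0.5, threshold=8.0):
    """Simplified online mean-change detector following steps (1)-(3).

    A CUSUM-style sketch: estimate the mean on a stretch assumed stationary,
    then accumulate standardised deviations and declare a change once a
    statistic crosses the threshold.
    """
    stream = np.asarray(stream, dtype=float)
    mu = stream[:train_n].mean()            # step 1: baseline estimates
    sigma = stream[:train_n].std(ddof=1)
    up = down = 0.0
    for i in range(train_n, len(stream)):
        z = (stream[i] - mu) / sigma        # step 2: standardise new point
        up = max(0.0, up + z - drift)       # statistic for upward changes
        down = max(0.0, down - z - drift)   # statistic for downward changes
        if max(up, down) > threshold:       # step 3: threshold crossed
            return i                        # index where the change is declared
    return None

rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.5, 1, 100)])
print(online_mean_change(series))  # flags shortly after the change at t = 200
```

In practice the returned index would trigger a re-estimation of the baseline, mirroring the "return to (1)" step above.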

Due to our two-week time constraint we chose to restrict ourselves to finding change points corresponding to a mean change, just one of the three changes we know are indicative of a leak. As per the fundamental principles of decision theory, we would like to tune and evaluate our algorithm by minimising some loss function which depends on some ‘training’ data. That is, we would like to look at some past period of time, make predictions of when leaks happened given the flow data across that period, then evaluate how accurate these predictions were and adjust or assess the model accordingly. However, to do this we would need to know when and where leaks actually occurred across the time period of the data, something we did not have access to. Without ‘labels’ indicating that a leak has occurred, any predictions from the model were essentially meaningless, so we sought a proxy.

The one potentially useful dataset we did have access to was that of leak repairs. It is clear that a leak must have occurred if a repair has occurred, but for various reasons this proxy does not provide an exhaustive account of all leaks. Furthermore, we do not know which repairs correspond to leaks identified by the particular distributional change in flow data we considered. This, in turn, means that all measures of model performance must come with the caveat that they are contingent on incomplete data. If, when conducting research, we find our results are limited, it is our duty as statisticians to report that this is the case – it is not our job to sugar-coat or manipulate our findings, but to report them with the limitations and uncertainty that inextricably come alongside. Results without uncertainty are as meaningless as no results at all. That being said, all indications pointed towards the method being effective in detecting mean change points in water flow data which correspond to leak repairs, a positive result to feed back to our friends at Wessex Water.

### Final Conclusions

Communicating statistical concepts and results to an audience with varied areas and levels of expertise is more important now than ever. The continually strengthening relationships between Compass and its industrial partners are providing students with the opportunity to gain experience in doing exactly this. The focus lab concluded with a presentation of our findings to the Wessex Water operations team, during which we reported the procedures and results. The technical results were well supported by a demonstration of an R Shiny dashboard app, which provided an intuitive interface to view the output of the developed algorithm.

Of course, there is more work to be done. Expanding the algorithm to account for all three types of distributional change is the obvious next step. Furthermore, fitting a GAM to data for 760 pipes is not very efficient, so investigating ways to ‘cluster’ groups of pipes together according to some notion of similarity is a natural avenue for future work to reduce the number of GAMs we need to fit. This experience enabled students to apply skills in statistical modelling, algorithm development, and software development to a salient problem faced by an industry partner, and marked a successful start to the Compass industry focus lab series.

## Student perspectives: Three Days in the life of a Silicon Gorge Start-Up

A post by Mauro Camara Escudero, PhD student on the Compass programme.

Last December the first Compass cohort took part in a three-day entrepreneurship training with SpinUp Science. Keep reading and you might just find out if the Silicon Gorge life is for you!

### The Ambitious Project of SpinUp Science

SpinUp Science’s goal is to help PhD students like us develop an entrepreneurial skill-set that will come in handy if we decide to either commercialize a product, launch a start-up, or choose a consulting career.

I can already hear some of you murmur: “Sure, this might be helpful for someone doing a much more applied PhD, but my work is theoretical. How is that ever going to help me?”. I get that; I used to believe the same. However, partly thanks to this training, I changed my mind and realised just how valuable these skills are, independently of whether you decide to stay in academia or find a job at an established company.

Anyways, I am getting ahead of myself. Let me first guide you through what the training looked like and then we will come back to this!

### Day 1 – Meeting the Client

The day started with a presentation that, on paper, promised to be yet another one of those endless and boring talks that make you reach for the Stop Video button and take a nap. The vague title “Understanding the Opportunity” surely did not help either. Instead, we were thrown right into action!

Ric and Crescent, two consultants at SpinUp Science, introduced us to their online platform where we would be spending most of our time in the next few days. Our main task for the first half-hour was to read about the start-up’s goals and then write down a series of questions to ask the founders in order to get a full picture.

Before we knew it, it was time to get ready for the client meeting and split tasks. I volunteered as Client Handler, meaning I was going to coordinate our interaction with the founders. The rest of Compass split into groups focusing on different areas: some were going to ask questions about competitors, others about the start-up product, and so on.

As we waited in the Zoom call, I kept wondering why on earth I had volunteered for the role, and my initial excitement was quickly turning into despair. We had never met the founders before, let alone had any experience consulting or working for a start-up.

Once the founders joined us, and after a wobbly start, it became clear that the hard part would not be avoiding awkward silences or struggling to get information. The real challenge was being able to fit all of our questions into this one-hour meeting. One thing was clear: clients love to talk about their issues and to digress.

After the meeting, we had a brief chat and wrote down our findings and thoughts on the online platform. I wish I could say we knew what we were doing, but in reality it was a mix of extreme winging and following the advice of Ric and Crescent.

Last on the agenda was a short presentation where we learned how to go about studying the market fit for a product, judge its competitors and potential clients, and overall how to evaluate the success of a start-up idea. That was it for the day, but the following morning we would put into practice everything we had learned up to that point.

### Day 2 – Putting our Stalking Skills to good use

The start-up that we were consulting for provides data analysis software for power plants and was keen to expand into a new geographical area. Our goals for the day were therefore to:

• understand the need for such a product in the energy market

• research what options are available for the components of their product

• find potential competitors and assess their offering

• find potential clients and assess whether they already had a similar solution implemented

• study the market in the new geographical area

This was done with a mix of good-old Google searches and cold-calling. It was a very interesting process as in the morning we were even struggling to understand what the start-up was offering, while by late afternoon we had a fairly in-depth knowledge of all the points above and we had gathered enough information to formulate more sensible questions and to assess the feasibility of the start-up’s product. One of the things I found most striking about this supervised hands-on training is that as time went on I could clearly see how I was able to filter out useless information and go to the core of what I was researching.

To aid us in our analyses, we received training on various techniques to assess competitors, clients and the financial prospect of a start-up. In addition, we also learned about why the UK is such a good place to launch a start-up, what kind of funding is available and how to look for investors and angels.

Exhausted by a day of intense researching, we knew the most demanding moments were yet to come.

### Day 3 – Reporting to the Client

The final day was all geared towards preparing for our final client meeting. Ric and Crescent taught us how to use their online platform to perform PESTEL and SWOT analyses efficiently, based on the insights that we had gathered the day before. It was very powerful seeing a detailed report come to life using inputs from all of our research.

With the report in hand, several hours of training under our belt, and a clearer picture in our heads, we joined the call and each one of us presented a different section of the report, while Andrea orchestrated the interaction. Overall, the founders seemed quite impressed and admitted that they had not heard of many of the competitors we had found. They were pleased by our in-depth research and, I am sure, found it very insightful.

### Lessons Learned

So, was it useful?

I believe that this training gave us a glimpse of how to go about picking up a totally new area of knowledge and quickly becoming an expert on it. The time constraint allowed us to refine the way in which we filter out useless information, to get to the core of what we are trying to learn about. We also worked together as a team towards a single goal and we formulated our opinion on the start-up. Finally, we had two invaluable opportunities to present in a real-world setting and to handle diplomatically the relationship with the client.

In the end, isn’t research all about being able to pick up new knowledge quickly, filter out useless papers, work together with other researchers to develop a method, and present the results to an audience?

Find out more about Mauro Camara Escudero and his work on his profile page.

## Student perspectives: The Elo Rating System – From Chess to Education

A post by Andrea Becsek, PhD student on the Compass programme.

If you have recently binge-watched The Queen’s Gambit, chances are you have heard of the Elo Rating System. There are actually many games out there that require some way to rank players or even teams. However, the applications of the Elo Rating System reach further than you might think.

### History and Applications

The Elo Rating System [1] was first suggested as a way to rank chess players; however, it can be used in any competitive two-player game that requires a ranking of its players. The system was first adopted by the World Chess Federation in 1970, and there have been various adjustments to it since, resulting in different implementations by each organisation.

For any soccer-lovers out there, the FIFA world rankings are also based on the Elo System, but if you happen to be into a similar sport, worry not, Elo has you covered. And the list of applications goes on and on: Backgammon, Scrabble, Go, Pokemon, and apparently even Tinder used it at some point.

Fun fact: the formulas used by the Elo Rating System make a famous appearance in The Social Network, a movie about the creation of Facebook. Whether this was the actual algorithm used for FaceMash, the precursor to Facebook, is however unclear.

All this sounds pretty cool, but how does it actually work?

### How it works

We want a way to rank players and update their ranking after each game. Let’s start by assuming that we have the ranking for the two players about to play: $\theta_i$ for player $i$ and $\theta_j$ for player $j$. Then we can compute the probability of player $i$ winning against player $j$ using the logistic function:

$P(Y_{ij}=1)=\frac{1}{1+\exp\{-(\theta_i-\theta_j)\}}.$

Given what we know about the logistic function, it’s easy to notice that the smaller the difference between the players, the less certain the outcome as the probability of winning will be close to $0.5$.

Once the outcome of the game is known, we can update both players’ abilities:

$\theta_{i}:=\theta_{i}+K(Y_{ij}-P(Y_{ij}=1))$

$\theta_{j}:=\theta_{j}+K(P(Y_{ij}=1)-Y_{ij}).$

The $K$ factor controls the influence of a player’s performance on their previous ranking. For players with high rankings, a smaller $K$ is used because we expect their abilities to be somewhat stable, so their ranking shouldn’t be too heavily influenced by every game. On the other hand, players with low ability can learn and improve quite quickly, so their rating should be able to fluctuate more, and they are given a larger $K$.

The term in the brackets represents how different the actual outcome is from the expected outcome of the game. If a player is expected to win but doesn’t, their ranking will decrease, and vice versa. The larger the difference, the more their rating will change. For example, if a weaker player is highly unlikely to win, but they do, their ranking will be boosted quite a bit because it was a hard battle for them. On the other hand, if a strong player is really likely to win because they are playing against a weak player, their increase in score will be small as it was an easy win for them.
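The update rule above fits in a few lines of code. This sketch uses the article's rating scale directly, and the value of $K$ is purely illustrative:

```python
import math

def elo_update(theta_i, theta_j, y_ij, k=0.5):
    """One Elo update on the article's scale.

    y_ij is 1 if player i beats player j and 0 otherwise; k is the K factor
    (chosen here purely for illustration).
    """
    p_win = 1.0 / (1.0 + math.exp(-(theta_i - theta_j)))  # P(Y_ij = 1)
    theta_i = theta_i + k * (y_ij - p_win)
    theta_j = theta_j + k * (p_win - y_ij)
    return theta_i, theta_j

# An upset: the weaker player (rating -1) beats the stronger player (rating 1).
weak, strong = elo_update(-1.0, 1.0, 1)
print(weak, strong)  # the weak player's gain exactly equals the strong player's loss
```

Because the same quantity is added to one rating and subtracted from the other, each game is zero-sum: the total rating in the system never changes.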

### Elo Rating and Education

As previously mentioned, the Elo Rating System has been used in a wide range of fields and, as it turns out, that includes education, more specifically adaptive educational systems [2]. Adaptive educational systems are concerned with automatically selecting adequate material for a student depending on their previous performance.

Note that a system can be adaptive at different levels of granularity. Some systems adapt the homework from week to week, generating it based on the student’s current ability and updating that ability once the homework has been completed, whereas other systems update the student’s ability after every single question. As you can imagine, the second kind of system requires a fairly fast, online algorithm. And this is where the Elo Rating comes in.

For an adaptive system to work, we need two key components: student abilities and question difficulties. To apply the Elo Rating to this context, we treat a student’s interaction with a question as a game where the student’s ranking represents their ability and the question’s ranking represents its difficulty. We can then predict whether a student of ability $\theta_i$ will answer a question of difficulty $d_j$ correctly using

$P(\text{correct}_{ij}=1)=\frac{1}{1+\exp\{-(\theta_i-d_j)\}},$

and the ability and difficulty can be updated using

$\theta_{i}:=\theta_{i}+K(\text{correct}_{ij}-P(\text{correct}_{ij}=1))$

$d_{j}:=d_{j}+K(P(\text{correct}_{ij}=1)-\text{correct}_{ij}).$

So even if you only have $10$ minutes to create an adaptive educational system you can easily implement this algorithm. Set all abilities and question difficulties to $0$, let students answer your questions, and wait for the magic to happen. If you do have some prior knowledge about the difficulty of the items you could of course incorporate that into the initial values.
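As a toy illustration of such a ten-minute system, here is a sketch in which every rating starts at zero, as suggested above. The question names, the student's hidden "true" ability, and the $K$ value are all invented for the example:

```python
import math
import random

def p_correct(theta, d):
    """Probability of a correct answer, as in the formula above."""
    return 1.0 / (1.0 + math.exp(-(theta - d)))

def elo_step(theta, d, correct, k=0.4):
    """Update ability and difficulty after one answered question."""
    p = p_correct(theta, d)
    return theta + k * (correct - p), d + k * (p - correct)

random.seed(0)
ability = 0.0                                      # estimated ability starts at 0
difficulties = {"q1": 0.0, "q2": 0.0, "q3": 0.0}   # so do question difficulties
true_ability = 1.5                                 # hidden ability of the student

# Simulate the student attempting each question 20 times.
for _ in range(20):
    for q in difficulties:
        correct = int(random.random() < p_correct(true_ability, 0.0))
        ability, difficulties[q] = elo_step(ability, difficulties[q], correct)

# The ability estimate drifts upward and the questions come to look easier
# relative to this student, reflecting the mostly-correct answers.
print(round(ability, 2), {q: round(d, 2) for q, d in difficulties.items()})
```

With a single student the scale is only identified relatively: every point the ability gains is a point the difficulties lose, so in practice the ratings are pinned down by pooling many students over many questions.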

One important thing to note is that one should be careful with ranking students based on their abilities as this could result in various ethical issues. The main purpose of obtaining their abilities is to track their progress and match them with questions that are at the right level for them, easy enough to stay motivated, but hard enough to feel challenged.

### Conclusion

So is Elo the best option for an adaptive system? It depends. It is fast, enables on the fly updates, it’s easy to implement, and in some contexts, it even has a similar performance to more complex models. However, there are usually many other factors that can be relevant to predicting student performance, such as the time spent on a question or the number of hints they use. This additional data can be incorporated into more complex models, probably resulting in better predictions and offering much more insight. At the end of the day, there is always a trade-off, so depending on the context it’s up to you to decide whether the Elo Rating System is the way to go.

Find out more about Andrea Becsek and her work on her profile page.

1. Elo, A.E., 1978. The rating of chessplayers, past and present. Arco Pub.
2. Pelánek, R., 2016. Applications of the Elo rating system in adaptive educational systems. Computers & Education, 98, pp.169-179.

## IBM Research Europe joins Compass as a new partner

The University of Bristol is excited to announce IBM Research Europe as a new partner of Compass – the EPSRC Centre for Doctoral Training in Computational Statistics and Data Science. IBM scientists are collaborating with Prof. Robert Allison and Compass PhD student Anthony Stephenson on a research project entitled Fast Bayesian Inference at Extreme Scale. The project’s aim is to extend Bayesian inference algorithms to the ‘extreme scales’ that many deep learning workloads occupy, by placing more focus on AI methodologies which furnish both an accurate prediction and, critically, a high-quality uncertainty representation for predictions.

For more than seven decades, IBM Research has defined the future of information technology with more than 3,000 researchers in 19 locations across six continents. Scientists from IBM Research have produced six Nobel Laureates, 10 U.S. National Medals of Technology, five U.S. National Medals of Science, six Turing Awards, 19 inductees in the National Academy of Sciences and 20 inductees into the U.S. National Inventors Hall of Fame.

IBM has European research locations in Switzerland (Zurich), England (Hursley and Daresbury), and Ireland (Dublin), with a large development lab in Germany focused on AI, quantum computing, security and hybrid cloud.

IBM’s global labs are involved in hundreds of joint projects with universities, particularly throughout Europe, in research programmes established by the European Union and local governments, and in cooperation agreements with research institutes of industrial partners.

Compass is a 4-year PhD training programme focusing on Computational Statistics and Data Science. This new venture is part of the Compass mission to promote academic and professional agility in its students, equipping them with the skills and experience to work across disciplines in academia and beyond.

Anthony Stephenson, the PhD student recruited to this project, says: “After several years working in industry, I am pleased to be starting the Compass programme and shifting my focus to research. Having the combined forces of the University of Bristol and IBM behind me inspires confidence and I look forward to working with members of each of them. My project, scalable inference in non-linear Bayesian models, is also a highly relevant and exciting area to work on, with many applications in modern machine learning.”

Dr Ed Pyzer-Knapp is World-Wide IBM Research Lead in AI Enriched Modelling and Simulation and says, “I am very excited to work with Anthony and Robert – scaling Bayesian inference is a really important area of machine learning research; bringing to bear our mantra of fusing of bits and neurons to further develop the future of computing. This project is a great opportunity to further strengthen our relationship with the University of Bristol.”

Prof Robert Allison is Anthony’s academic supervisor at the University of Bristol and says, “I’m really looking forward to working with Anthony and Ed on a highly important and widely applicable area of machine learning which encompasses mathematical research, data-analysis, algorithm development and efficient large-scale computation. In addition, I see this project as an ideal opportunity to seed wider ranging data-science and machine learning collaborations between IBM Research, their academic partners and the University of Bristol.”

As Director of Compass, Prof Nick Whiteley says: “I’m absolutely delighted to welcome IBM Research to Compass. This project is a fantastic opportunity for Anthony to tackle a very challenging and increasingly important AI research problem under Prof. Allison and Dr. Pyzer-Knapp’s supervision. As this collaboration develops, I look forward to all Compass students learning about IBM’s vision for the future of AI and its connection to the expertise in statistical methodology and computing they will acquire through the Compass training programme.”

## Improbable sponsors Compass PhD student in new partnership

Improbable, a global technology company which provides innovative products and services to makers of virtual worlds and simulations, is sponsoring a PhD research project entitled Agent-based model calibration using likelihood-free inference.

The University of Bristol is announcing a new industrial sponsor of Compass – the EPSRC Centre for Doctoral Training in Computational Statistics and Data Science. The project’s aim is to devise a general framework for calibrating agent-based models from training data by inferring the model parameters in a statistical framework.

## Sparx joins as Compass’ newest industrial partner

The University of Bristol is today announcing a new supporter of Compass – the EPSRC Centre for Doctoral Training in Computational Statistics and Data Science. South West-based learning technology company Sparx has agreed to sponsor a PhD student’s research project which will investigate new approaches to longitudinal statistical modelling within school-based mathematics education.

Sparx, which is located in Exeter, develops maths learning tools to support teaching and learning in secondary education. As an evidence-led company, Sparx has invested heavily in researching how technology can support the teaching and learning of maths, and has worked closely with local schools. This new investment underlines its ongoing commitment to research.

## Compass welcomes first cohort

Twelve industry partners joined our new students at our Centre for Doctoral Training launch event.

The EPSRC CDT Compass welcomed its new students and external partners from industry and government agencies for the inaugural Partner Day on 1 October 2019.

The Engineering and Physical Sciences Research Council awarded the School of Mathematics over £6 million to launch the new Centre for Doctoral Training (CDT). The first intake of nine students registered in September to embark on this innovative 4-year PhD programme which combines training and research to address challenges across science, industry, and society.

Representatives from the external partner organisations visited the newly remodelled home of the School of Mathematics in the Fry Building. The Compass team learned about partners’ work in Data Science, ranging from smart apps for mathematical education to forecasting tools for energy systems. We welcomed representatives from:

Adarga | AWE | CheckRisk | EDF | GSK | LV | Office for National Statistics | Malvern Panalytical | UK Space Agency | SCIEX | SPARX | Trainline

Dr. Nick Whiteley, Compass Director commented: “I am thrilled to see our new students embark on the Compass programme and take their first steps in research. It was a pleasure to welcome our partners to the wonderful Fry Building, where they shared thought-provoking insights into the work they do and the statistical challenges they face. Our partners play a key role in enriching the training Compass students receive and the research they conduct.”

Jasmine Grimsley from Office for National Statistics said:

“The Data Science Campus at the Office for National Statistics plans to work with Compass students on projects that use data science for public good. Students have the opportunity to come and work with us on industry placements, or to work on real-world problems for their thesis topics.”