Felisia Loukou – Explainability in AI
Felisia Loukou is a Senior Data Scientist at Adarga, working on language modelling for knowledge discovery and developing Adarga's explainable AI strategy to deliver actionable, reliable insights. She previously worked at the Government Digital Service, where she co-founded GOV.UK Data Labs, a multi-disciplinary team set up to leverage the organization's unstructured data estate. She led GOV.UK's structured content strategy to improve user experience and established the organization's knowledge graph. Felisia holds Master's degrees in Computer Science and Natural Language Processing from the University of Oxford and the University of Edinburgh.
Abstract: Explainability enables humans to "understand" the structure, behaviour and decisions of machine learning models (or, more generally, intelligent systems). The term is often used as an umbrella for different principles in different contexts. This presentation introduces a framework for addressing these principles in practice:
- Trust, Transparency and Fairness: the ability to characterize the operational and legal needs for explanation, and the corresponding social benefits
- Interpretability: the ability to interpret an AI system, including assessing and quantifying how model structure influences performance
- Explainability: the ability to explain the decisions of an AI system (a brief illustration follows this list)
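As a purely illustrative example of the kind of post-hoc explanation technique the talk covers, the sketch below computes permutation feature importance with scikit-learn on synthetic data. The model, data and parameters are assumptions for the sake of a runnable example, not a description of Adarga's own tooling.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data in which only a couple of features are informative.
X, y = make_classification(n_samples=500, n_features=6, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```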
Dr. Daniel Cowley – Bias in AI
Dr Daniel Cowley is a Principal Data Scientist at Adarga, working on machine learning, natural language processing and knowledge graphs. He has a PhD in applied mathematics from the University of Bristol, where he specialised in graph theory and modelling information flow across networks.
Abstract: This talk will give an introduction to bias in artificial intelligence. We will define what we mean by bias in an AI system and discuss the different types of bias, from observer bias through to confirmation bias. We will then consider the effects of bias in AI, in particular on the allocation of resources and the representation of identity. Finally, we will look at some potential technical solutions and identify the areas we should be focusing on in the future.
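To make the "allocation of resources" kind of harm concrete, here is a minimal sketch of one common bias check, demographic parity: whether a model selects members of two groups at similar rates. The predictions and group labels are invented for illustration and do not come from the talk.

```python
import numpy as np

def selection_rate(preds, group, value):
    """Fraction of positive predictions within one group."""
    mask = group == value
    return preds[mask].mean()

# Hypothetical binary model outputs and a protected attribute (0 or 1).
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

rate_a = selection_rate(preds, group, 0)
rate_b = selection_rate(preds, group, 1)
print(f"group 0 selection rate: {rate_a:.2f}")
print(f"group 1 selection rate: {rate_b:.2f}")
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")
```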
Dr. Daniel Clarke – Security in AI
Daniel is the Head of Applied Sciences at Adarga, where he leads the development of an Artificial Intelligence framework for enabling knowledge and insights to be drawn from disparate sources of human-generated data. Daniel has spent the majority of his career developing advanced techniques for sensor signal processing, sensor fusion and machine learning in a number of technology areas, including cyber security, autonomous driving and, most recently, natural language processing.
Abstract: The techniques and methodologies that make up modern Artificial Intelligence have the potential to accelerate a number of different and societally important technologies. When an analytical solution either does not exist or is too complicated, we develop a data-driven, machine learning solution. However, the numerical nature of many of these algorithms introduces vulnerabilities: carefully crafted, false input stimuli can naively lead an algorithm to generate false results. In this presentation we introduce this concept in machine learning and articulate some of the challenges it raises for the secure and acceptable deployment of machine learning.
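One well-known instance of such a false input stimulus is the adversarial example. The sketch below, offered only as an assumed illustration of the general idea (not of the talk's specific material), applies the fast gradient sign method of Goodfellow et al. (2015) to a toy logistic-regression classifier with made-up weights; a small, targeted perturbation flips the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights and a correctly classified input.
w = np.array([2.0, -3.0, 1.5])
b = 0.5
x = np.array([0.4, -0.2, 0.1])
y = 1.0  # true label

p = sigmoid(w @ x + b)
print(f"clean prediction:       {p:.3f} (label {int(p > 0.5)})")

# Gradient of the cross-entropy loss with respect to the input is (p - y) * w.
grad_x = (p - y) * w

# FGSM: step in the direction that increases the loss, bounded by epsilon.
eps = 0.4
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)
print(f"adversarial prediction: {p_adv:.3f} (label {int(p_adv > 0.5)})")
```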
About Adarga
“Adarga’s powerful AI platform uses natural language processing and machine learning techniques to read, understand and analyse all of your data, giving you the knowledge you need to make crucial decisions. Adarga’s vision is to empower us all to realise the full potential of available knowledge. Our mission is to enhance your ability to use information to make better decisions today.”
See also Adarga’s website and LinkedIn page.
Non-Compass attendees
We have a limited number of spaces for non-Compass attendees. To register for this and future events, please submit this registration form.