Research Highlights
Full research profile and academic publications available at Google Scholar, ResearchGate & ORCID.
21st century medicine & emerging biotechnological syndromes: a cross-disciplinary systematic review of novel patient presentations in the age of technology (BMC Digital Health, 2023)
Our paper provides the first systematic review of all case reports describing illnesses related to digital technology over the past ten years, through which we identify novel biotechnological syndromes and map out new causal pathways of disease. We also highlight significant gaps in medical care that have disadvantaged a community of patients suffering from these digital complaints.
Sex-Based Performance Disparities in Machine Learning Algorithms for Cardiac Disease Prediction: Exploratory Study (JMIR, 2024)
Our research exposes a significant gap in cardiac ML research, highlighting that the underperformance of algorithms for female patients has been overlooked in the published literature. We found an underrepresentation of female patients in the data sets used to train algorithms, identified sex biases in model error rates, and demonstrated that a series of remediation techniques were unable to address the inequities present.
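As a rough illustration of the kind of sex-stratified error analysis described above (not the study's actual pipeline), the sketch below trains a generic classifier and compares false-negative rates between female and male patients; the file, column names and model are hypothetical.

```python
# Minimal sketch of a sex-stratified error audit for a cardiac classifier.
# Illustrative only: the file, column names and model are hypothetical and
# do not reproduce the published study's data or methods.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

df = pd.read_csv("cardiac_cohort.csv")                 # hypothetical cohort file
X = df.drop(columns=["disease", "sex"])
y, sex = df["disease"], df["sex"]

X_tr, X_te, y_tr, y_te, sex_tr, sex_te = train_test_split(
    X, y, sex, test_size=0.3, stratify=y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Compare false-negative rates (missed disease) between female and male patients
for group in sorted(sex_te.unique()):
    mask = (sex_te == group).to_numpy()
    tn, fp, fn, tp = confusion_matrix(y_te[mask], pred[mask]).ravel()
    print(f"{group}: false-negative rate = {fn / (fn + tp):.2f}")
```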
Connected to the cloud at time of death: a case report (JMCR, 2024)
Our case report provides the first clinical evaluation of autopsy practices for a patient death that occurs on the cloud. Through the story of this patient, we examine how autopsy practices may require adaptation for a death that presents via the ‘Internet of Things’, evaluating whether existing guidelines capture data related to death that is no longer confined to the patient's body.
Insights From a Clinically Orientated Workshop on Health Care Cybersecurity and Medical Technology: Observational Study and Thematic Analysis (JMIR, 2024)
Our findings are derived from an internationally attended workshop on healthcare cybersecurity that engaged healthcare professionals, cybersecurity experts, security and intelligence officials, policy-makers and academics. We identified key challenges faced by frontline health care workers during digital events. Clinicians reported novel forms of harm related to technology (eg, geofencing in domestic violence and errors related to interconnected fetal monitoring systems) and barriers impeding adverse event reporting.
When brain devices go wrong: a patient with a malfunctioning deep brain stimulator (DBS) presents to the emergency department (BMJ Case Reports, 2022)
In this paper we tell the story of an acutely unwell patient who presented to the Emergency Department with a malfunctioning deep brain stimulator. Through this case, we describe the challenges encountered by clinicians when trying to treat a condition stemming from a technological failure, and expose significant gaps in current medical guidance for patients suffering from these conditions.
Research Group & Outputs
I lead the CRASH AI research group at UCL, which examines the Cybersecurity Resiliency and Safety of Healthcare Artificial Intelligence. We are located in the UCL Faculty of Population Health Sciences, ranked #3 in the world for public health. We also work closely with the Gender and Tech Research Lab at UCL Computer Science, and the Information Security (InfoSec) group (a UK Academic Centre of Excellence in Cyber Security Research).
MSc students are currently being allocated for 2025, and I am recruiting students for projects focused on modelling NHS cyberattacks and evaluating safety & fairness issues in AI models used for predicting cancer outcomes.
Previous Students / Alumni
Yang Li, MSc Student, An Edge System for Medical Internet of Things (Mac Supervision) > PhD at King's College London.
Artificial Intelligence in mental health and the biases of language based models (PLOS ONE, 2020)
Our study evaluated bias in NLP models used in psychiatry and discussed how these biases may widen health inequalities. Our primary analysis of mental health terminology in GloVe and Word2Vec embeddings demonstrated significant biases with respect to religion, race, gender, nationality, sexuality and age. For instance, when asked the analogy question “British is to Depression, as Irish is to _?”, the model returns ‘alcoholism’.
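A minimal sketch of how such an analogy can be probed with off-the-shelf embeddings, assuming the pretrained "glove-wiki-gigaword-300" vectors available through gensim; the exact embeddings and query terms used in the study may differ.

```python
# Probing a word-embedding analogy of the form "A is to B as C is to ?"
# using vector arithmetic (B - A + C) via gensim's most_similar().
# Assumes pretrained GloVe vectors from gensim-downloader; the study's
# own embeddings and vocabulary may differ.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-300")

# "british is to depression, as irish is to ?"
results = vectors.most_similar(positive=["depression", "irish"],
                               negative=["british"], topn=5)
for word, score in results:
    print(f"{word}\t{score:.3f}")
```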
Simulation-based research for digital health pathologies: A multi-site mixed-methods study (Digital Health, 2024)
In this paper we report the outcomes of an NHS simulation study performed across multiple sites, engaging clinicians from around the UK to take part in medical scenarios involving technological failures. We evaluated the ability of healthcare professionals to respond to software, hardware and connectivity failures in implanted devices, and assessed their responses to challenging scenarios of tech-abuse. Our recommendations are relevant to educators, practising clinicians and professionals working in regulation, policy and industry.
Representational ethical model calibration (npj Digital Medicine, 2022)
Through this research we demonstrate a novel approach for uncovering and modelling algorithmic biases in medical machine learning, evaluating model performance across multidimensional representations of identity with a dataset from the UK Biobank. We offer our approach as a principled solution to quantifying and assuring epistemic equity in healthcare, with applications across the research, clinical, and regulatory domains.
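As a generic illustration of evaluating performance across intersecting identity attributes (not the paper's calibration method), the sketch below audits a set of predictions by sex, ethnicity and age band; the data are synthetic and the attribute names hypothetical.

```python
# Generic intersectional performance audit: accuracy for every combination
# of sex, ethnicity and age band. Synthetic data for illustration only;
# this is not the representational calibration method from the paper.
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000
audit = pd.DataFrame({
    "y_true": rng.integers(0, 2, n),
    "y_pred": rng.integers(0, 2, n),
    "sex": rng.choice(["female", "male"], n),
    "ethnicity": rng.choice(["group_a", "group_b", "group_c"], n),
    "age_band": rng.choice(["40-54", "55-69", "70-84"], n),
})

# Accuracy for every intersection of the three identity attributes
acc_by_group = (
    audit.groupby(["sex", "ethnicity", "age_band"])[["y_true", "y_pred"]]
         .apply(lambda g: accuracy_score(g["y_true"], g["y_pred"]))
)
print(acc_by_group.sort_values().head(10))   # worst-served subgroups first
```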
Safeguarding patients from technology-facilitated abuse in clinical settings: A narrative review (PLOS ONE, 2020)
Tech-abuse refers to the misuse of digital systems such as smartphones or other Internet-connected devices to monitor, control and intimidate individuals. In this paper we examine the existing literature on technology-facilitated abuse in clinical settings and evaluate the safeguarding guidance that is available to healthcare practitioners working with vulnerable groups. We identify that this is a largely neglected issue in the clinical literature and that safeguarding protocols fail to account for evolving harms associated with novel technologies.
Policy Highlights
In addition to academic publications, I have engaged with policy makers in national and international governmental bodies, contributing to emerging guidance on the regulation of Artificial Intelligence and the management of healthcare cybersecurity threats.
Workshop Reports
The following policy report summarises an international workshop I co-led as part of the Reg-MedTech project, funded by the PETRAS National Centre of Excellence in IoT Systems Cybersecurity (EPSRC grant number EP/S035362/1), in collaboration with project partners at the BSI, the UK’s National Standards Body.
Brass, I., Straw, I., Mkwashi, A., Charles, I., Soares, A., Steer, C. (2023) Emerging Digital Technologies in Patient Care: Dealing with connected, intelligent medical device vulnerabilities and failures in the healthcare sector. Workshop Report. London: PETRAS National Centre of Excellence in IoT Systems Cybersecurity. DOI: 10.5281/zenodo.8011139. Read here.
Contribution to UN policies and recommendations
Expert Author: Van Niekerk, D., Pérez-Ortiz, ... & Aneja, U. "Challenging Systematic Prejudices: An Investigation into Bias Against Women and Girls". United Nations (UNESCO). March 2024. Read here.
Expert Author: Van Niekerk D, Pérez-Ortiz M, ... Straw I, Chair C, Aneja U, Kay J, and Siegel N. “I don’t have a gender, consciousness, or emotions. I’m just a machine learning model.” International Research Centre on Artificial Intelligence (IRCAI) under the auspices of UNESCO, United Nations. 2023. Read here.
Contributed to drafting and edits during placements in 2019 & 2021 (Paris HQ): United Nations (UNESCO). Recommendation on the Ethics of Artificial Intelligence. Read here.
These policy reports at the UN were highlighted in the following press releases:
Generative AI: UNESCO study reveals alarming evidence of regressive gender stereotypes (UNESCO Press Release, 2024)
On International Women's Day, our research on AI and gender, conducted as part of a UNESCO study, was showcased in this international press release. The project revealed worrying tendencies in large language models (LLMs) to produce gender bias, as well as homophobia and racial stereotyping.
UNESCO Study Exposes Gender and Other Bias in AI Language Models (CEPIS, 2024)
This press release from the Council of European Professional Informatics Societies (CEPIS) publicised our research with the UN, exposing the biases embedded within Large Language Models (LLMs), including widely used platforms such as GPT-3.5 and GPT-2 by OpenAI, and Llama 2 by Meta. The work was also featured in UCL News here.