Cyber Security research theme

within the Computer Science Research Centre (CSRC).

Overview

The Cyber Security research theme covers a wide range of topics within the scope of cyber security, spanning software, sensor and infrastructure systems security; data security, AI-based security and the security of AI; and cyber crime, digital forensics, human aspects of security, and the geo-politics of cyber security.

Much of our research is applied in nature, conducted through extensive partnerships with government and industry, including the National Cyber Security Centre and GCHQ, the Ministry of Defence, and the National Crime Agency, as well as industry collaborators across the telecommunications, national security, and defence sectors.

Theme Lead

PhD/DPhil students

 

Current

Aimen Djemaa

Title: Hybrid outlier clustering method using a novel dataset in intrusion detection systems.

Director of Studies: Dr Djamel Djenouri
Supervisor: Professor Phil Legg

Yunus Karrem

Title: Securing IoT systems using emerging blockchain variants, decentralised identity and proof of location.

Abstract: This research proposes a novel blockchain consensus algorithm, PoLBFT, which combines Proof of Location and Practical Byzantine Fault Tolerance to enhance the security of IoT systems. The study examines the inherent insecurity of IoT environments and the limitations of existing centralized solutions, categorizing key security challenges into Privacy, Invulnerability, and Trust. To address these, a decentralized architecture is proposed, leveraging blockchain technologies selected through an evaluation of emerging frameworks. The PoLBFT algorithm is designed to meet the resource constraints of IoT devices while improving the security and efficiency of traditional blockchain consensus methods.
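The central idea above, gating a PBFT-style commit on each participant's proof of location, can be illustrated with a minimal sketch. This is not the thesis implementation of PoLBFT: the `Vote` structure, the zone-matching check, and the quorum wiring are all hypothetical simplifications.

```python
from dataclasses import dataclass

@dataclass
class Vote:
    node_id: str
    claimed_zone: str   # zone asserted in the node's proof of location
    verified_zone: str  # zone confirmed by neighbouring witnesses

def location_valid(vote: Vote) -> bool:
    # A proof of location is accepted only if the claimed zone
    # matches what witnesses observed.
    return vote.claimed_zone == vote.verified_zone

def pbft_commit(votes: list[Vote], total_nodes: int) -> bool:
    # PBFT tolerates f faulty nodes out of n >= 3f + 1 and commits once
    # 2f + 1 valid votes arrive; here a vote only counts if its
    # location proof also checks out.
    f = (total_nodes - 1) // 3
    valid = sum(1 for v in votes if location_valid(v))
    return valid >= 2 * f + 1
```

The sketch shows why combining the two mechanisms is cheap for constrained IoT nodes: the location check is a per-vote filter layered onto the ordinary PBFT counting rule.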

Director of Studies: Dr Djamel Djenouri
Supervisor: Dr Essam Ghadafi (Newcastle University)

Dalila Khettaf

Title: Graph-based group anomaly detection in IoT with deep learning.

Director of Studies: Dr Djamel Djenouri
Supervisor: Dr Zeinab Rezaeifar

Carol Lo

Title: Micro-, meso-, and macro-level detection of Advanced Persistent Threats (APT) in industrial cyber-physical systems: A focus on living-off-the-land techniques.

Abstract: Attacks on industrial Cyber-Physical Systems (CPS) can cause severe and irreversible real-world consequences, as demonstrated by incidents such as the Ukraine power grid attacks. Detecting cyber-physical threats remains challenging, particularly when adversaries employ Living-off-the-Land (LOTL) techniques that abuse legitimate tools, protocols, and system functions to evade conventional security controls. This thesis addresses the problem of detecting stealthy, multi-stage LOTL-based Advanced Persistent Threats (APT) in industrial CPS environments. To enable safe, repeatable, and realistic experimentation, this research develops a set of simulation-based cyber-physical testbeds and proposes a multi-level detection framework that integrates evidence across process, network, and host domains. Unlike conventional detection approaches that analyse these domains independently, this thesis introduces a unified two-level decision fusion strategy that correlates heterogeneous cyber and physical evidence under conditions of partial observability. The proposed detection framework operates across three analytical levels. At the micro-level, an interpretable supervised deep learning anomaly detection method performs multi-class classification of CPS process states, enabling detection of control manipulation beyond conventional binary and rule-based approaches. At the meso-level, a risk-based unsupervised learning approach identifies high-risk OPC UA write operations capable of directly manipulating physical processes, while an interpretable point-based scoring mechanism aggregates heterogeneous host telemetry into time-bounded anomaly scores to amplify weak and low-and-slow attack signals. 
At the macro-level, a two-level decision fusion process correlates heterogeneous evidence across time and domains: the first fusion layer integrates process and network anomalies, and the second incorporates host anomalies to enhance situational awareness even when individual alerts are weak or ambiguous. The literature review revealed significant challenges in accessing facilities that support cyber-physical experimentation, as well as a scarcity of representative APT datasets. These challenges motivated this research to investigate the design and implementation of simulation-based testbeds for the safe and repeatable simulation of multi-stage LOTL-based attacks, and the development and evaluation of detection strategies (RQ1). The experimental results on two case studies related to quality-checking and pick-and-place processes demonstrate that process- and network-level anomalies provide reliable confirmation of physical manipulation in CPS, but are limited in their ability to provide early warning, whereas host-level anomalies offer earlier indicators of cyber-initiated intrusion (RQ2 - RQ3). By fusing heterogeneous evidence across process, network, and host domains, the proposed decision fusion mechanism significantly improves detection robustness, timeliness, interpretability, and situational awareness compared to single-layer detection approaches (RQ4). The framework functions as an additional analytical layer that complements existing security architectures. It provides a practical and extensible foundation for human-in-the-loop CPS security operations under industrial constraints, supporting earlier and more interpretable detection of cyber actions that escalate into physical consequences.
The findings contribute to CPS security research by establishing a cross-domain decision fusion framework under partial observability, highlighting the necessity of risk-aware and interpretable analytics for human-in-the-loop operations, and informing governance and industrial practice by demonstrating the importance of continuous cross-domain monitoring as a complement to preventive-centric security mechanisms in industrial CPS environments.
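As a rough illustration of the two-level fusion idea, the following sketch combines per-domain anomaly scores in [0, 1]: the first layer corroborates process and network evidence, and the second layer folds in host telemetry. This is not the thesis framework; the weights, thresholds, and function names are invented for the example.

```python
def fuse_level1(process_score: float, network_score: float) -> float:
    # First layer: when both cyber-physical domains show some anomaly,
    # corroboration keeps the stronger signal; otherwise average down.
    if min(process_score, network_score) > 0.3:
        return max(process_score, network_score)
    return 0.5 * (process_score + network_score)

def fuse_level2(level1_score: float, host_score: float) -> float:
    # Second layer: host evidence raises confidence even when
    # individual process/network alerts are weak or ambiguous.
    return min(1.0, 0.6 * level1_score + 0.4 * host_score)

def alert(process_s: float, network_s: float, host_s: float,
          threshold: float = 0.5) -> bool:
    return fuse_level2(fuse_level1(process_s, network_s), host_s) >= threshold
```

The point of the two stages is that no single domain has to cross the alert threshold on its own: weak but correlated evidence from several domains can still trigger an alert.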

Director of Studies: Professor Phil Legg
Supervisors: Dr Thomas Win (Sunderland University), Dr Zeinab Rezaeifar and Professor Zaheer Khan

Rania Louadj

Title: Privacy preservation of LLMs in IoT device security.

Director of Studies: Dr Djamel Djenouri
Supervisor: Dr Majid Mumtaz

Alan Mills

Title: Investigation of limited resource virtualisation security: Understanding and mitigating against open software vulnerabilities.

Director of Studies: Professor Phil Legg
Supervisor: Dr Sarfraz Brohi

Romia Nasifa

Title: Security aware adaptive offloading for resource-constrained UAV systems using homomorphic encryption.

Director of Studies: Dr Ahmad Salman
Supervisors: Dr Amina Hamoud (School of Engineering, UWE Bristol) and Dr Sarfraz Brohi

Patrick Robinson

Title: Securing robotics in practice: Developer workflows, system architecture, and the ROS2 ecosystem.

Abstract: Secure systems emerge from the intersection of technology, theoretical understanding, and engineering practice; security systems research must therefore account for all three areas and their interactions. This research targets real-world developer workflows and the tools they use, in order to improve the security posture of robotics applications developed by companies and research organisations with constrained resources, which are limited in their ability or inclination to apply formal modelling and verification techniques. In doing so, it will contribute practical and realistic solutions that can be adopted by engineering communities and that work constructively with the cultures and processes that have found widespread success within those communities and the wider robotics ecosystem.

The project focuses on distributed robotic systems built on the ROS2 (Robot Operating System 2) middleware, where security risks often emerge from architectural and discovery-level design decisions rather than from isolated software defects, and where a design philosophy of enabling quick integration and prototyping at the expense of formal modelling has contributed to its popularity as a research and industry tool. This research proceeds from the position that architecture encodes and constrains practice; careful examination of the pedagogical and epistemological structures implicit in the technology is therefore of paramount importance to the design of appropriate interventions for improving security outcomes. Co-constitutive with the ROS2 system are the developer attitudes, priorities and mental models that determine which methods, approaches and abstractions are adopted and considered useful. Foundational to this research will therefore be careful and extensive evaluation of developer workflow realities through inductive user studies, involving surveys and semi-structured interviews with professionals who develop and deploy software for robotics and autonomous systems.

The overarching aim of this research is to enable secure-by-design development of robotics and cyber-physical systems by introducing systematic, architecture-level methods for reasoning about security properties early in the design lifecycle.

Director of Studies: Dr Nathan Renney
Supervisor: Dr Dan Withey (Bristol Robotics Laboratory)

Mia Smith

Title: System-level vulnerability and security analysis of closed-loop implantable medical devices.

Director of Studies: Professor Phil Legg
Supervisor: Dr Thomas Draper (Healthtech Hub)

Michael Yamsi Tchuindjang

Title: A defence model for large language models (LLMs) against multi-turn jailbreak attacks.

Director of Studies: Dr Nathan Duran
Supervisors: Dr Faiza Medjek and Professor Phil Legg

Yen Wang

Title: Rust binary analysis framework (RBAF): A hybrid LLVM-IR based approach for effective decomposition of Rust-based binaries.

Abstract: As the cybersecurity landscape continues to evolve, attackers are increasingly exploiting Rust's cross-platform capabilities and unique features to create highly resilient malware. Emerging variants written in other languages, such as Zig, pose similar challenges. Recovering high-level type information from binaries is crucial for security analysis, vulnerability discovery, and legacy system maintenance; however, compilation often strips away symbols and type information, making Rust-based malware more difficult to analyse. This study aims to bridge the research gap by exploring promising analysis strategies for Rust-based malware and providing insight into the unique challenges posed by this emerging threat.

Director of Studies: Dr Benedict Gaster
Supervisors: Dr Nathan Renney and Professor Phil Legg

Jonathan White

Title: Federated learning: An analysis into the balance of machine learning and security.

Director of Studies: Professor Phil Legg
Supervisor: Professor Jim Smith

Recent

James Barrett (completed 2026)

Title: Interactive machine learning for identifying threats to security and service in large-scale mobile networks.

Abstract: The use of machine learning for predictive analytics is an expansive and continually changing field, in both real-time and non-real-time environments. Ensuring effective security alerting practices while providing consistent service with machine learning-based systems has emerged as a competitive and continually improving area, with the balance between the two proving a developing challenge. Existing research on interactive and explainable machine learning systems for service and cybersecurity purposes has seen significant growth, but has also raised concerns and challenges around performance needs and privacy considerations. Largely unexplored is the adaptive, interactive use of novel interactive and explainable machine learning methods to continually adapt to and serve the needs of predictive systems, deterring threats pre-emptively and ensuring the service and safety of large-scale network systems; our application area is telecommunications.

Director of Studies: Professor Phil Legg
Supervisor: Professor Jim Smith
Industry partnership: Ribbon Communications

Sadegh Bamohabbat Chafjiri (completed 2025)

Title: Advancing software fuzzing techniques through the exploration of cryptographic concepts and machine learning.

Abstract: Modern software and networks are the backbone of our digital society, yet they are increasingly susceptible to security vulnerabilities that malicious actors may exploit. Effectively addressing these vulnerabilities necessitates proactive and automated strategies to identify and mitigate risks, particularly within large-scale datasets. Fuzzing has emerged as a pivotal technique in this field; however, traditional methods encounter significant challenges related to deep bug discovery, input quality, and scalability. Leveraging machine learning (ML) techniques, including advanced architectures such as Long Short-Term Memory (LSTM), Generative Adversarial Networks (GANs), and Gated Recurrent Units (GRUs), and categorizing them provides a clear roadmap and perspective for solving the problem. This dissertation begins by categorizing the integration of various machine learning models, including Traditional ML (TML), Deep Learning (DL), Reinforcement Learning (RL), and Deep Reinforcement Learning (DRL), and reviews advancements, methodologies, and challenges in applying these paradigms to fuzzing. Building on this foundation, we propose novel enhancements to fuzzing tools by integrating cryptographic structures. Specifically, we embedded substitution-permutation networks (SPNs) and Feistel networks (FNs) into the custom mutator of the AFL++ framework, referred to as the HonggFuzz library. This resulted in the development of a new custom mutator, HonggFuzz+, which demonstrates improved performance in identifying software bugs and discovering new code edges through optimised search space exploration. Preliminary experimental results, focusing on the number of unique bugs identified across various targets, validate the effectiveness of these methods in diversifying memory region relationships, paving the way for advancements in fuzzing tool development.

In the next stage, we extended the experiments to a wider range of targets and optimised the implementation of Feistel-inspired transformations (Feistel swaps) by integrating them directly into the baseline of AFL++. This approach eliminated the need for a custom mutator while streamlining the integration of cryptographic mutators and enhancing randomisation efficiency. Additionally, we addressed challenges related to code coverage and random number generation (RNG) bias by leveraging a larger-scale benchmark, FuzzBench. We present three innovative fuzzing models: CAFL++, PCGAFL++, and CPCGAFL++. These integrate Feistel-inspired transformations and unbiased RNG mechanisms into AFL++, resulting in enhanced code coverage and stability. Our evaluation across multiple targets highlights the advantages of these approaches, particularly in reducing performance variability and enhancing bug discovery. Finally, we investigate the role of neural network optimisations in fuzzing, employing techniques like LReLU to counteract gradient vanishing issues, Nesterov-accelerated Adaptive Moment Estimation (Nadam) for refined weight updates, and sensitivity analysis for model refinement. These innovations, coupled with game-theoretic insights, demonstrate significant improvements in fuzzing efficacy, achieving better accuracy, edge coverage, and unique bug identification compared to baseline methods. This dissertation thus contributes novel methodologies and insights to advance the state-of-the-art in software fuzzing, enhancing both its effectiveness and reliability in the evolving cybersecurity landscape.
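The Feistel-inspired mutation idea can be sketched in a few lines: split the input into two halves and run keyed Feistel rounds over it, diffusing structure across the buffer while remaining invertible. This is a toy illustration only, not the AFL++ integration described above; the round function here is a plain keyed XOR and the function name is hypothetical.

```python
def feistel_mutate(data: bytes, rounds: int = 2, key: int = 0x5A) -> bytes:
    """Apply a simple Feistel-style transformation to a fuzzing input."""
    half = len(data) // 2
    left = bytearray(data[:half])
    right = bytearray(data[half:half * 2])
    tail = data[half * 2:]  # any odd trailing byte is left untouched
    for r in range(rounds):
        # Round function F: a keyed XOR over the right half.
        f_out = bytes(b ^ ((key + r) & 0xFF) for b in right)
        # Classic Feistel step: new_right = left XOR F(right); halves swap.
        new_right = bytearray(l ^ f for l, f in zip(left, f_out))
        left, right = right, new_right
    return bytes(left) + bytes(right) + tail
```

Because each round only XORs and swaps, the transformation is deterministic and reversible by replaying the round keys in reverse order, which is the property that makes Feistel structures attractive as mutators.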

Director of Studies: Professor Phil Legg
Supervisors: Dr Antisthenis Tsompanas and Professor Jun Hong

Khoa Phung (completed 2024)

Title: Integrating communicating X-machines, probabilistic and machine learning models to create a rigorous runtime error detection system for Java programs.

Abstract: Software Fault Prediction (SFP) is crucial for ensuring software quality and optimizing resources. However, current SFP methods often rely on simplistic binary classifications, overlooking the complexity and diversity of faults. This study aims to enhance SFP accuracy and utility by focusing on detailed, project-specific fault analysis, particularly considering fault types and quantities. Specifically, this research aims to:

  1. model Java Runtime Errors (JREs) using Stream X-Machines (SXM), termed Error Specification Machines (ESMs)
  2. generate and quantify JRE test cases from ESMs as ESM values
  3. integrate ESM values with conventional software metrics to improve machine learning algorithms for error-type proneness prediction
  4. utilise ESM values as Error-type Metrics in a novel risk categorisation strategy for SFP.
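As a hypothetical illustration of step 3, ESM-derived error-type counts might be appended to a conventional software-metrics vector before training a predictor. All field names and error types below are invented for the example and are not taken from the thesis.

```python
def build_feature_vector(conventional: dict, esm_values: dict) -> list:
    # Conventional software metrics, e.g. lines of code and
    # cyclomatic complexity (illustrative keys).
    base = [conventional["loc"], conventional["complexity"]]
    # Error-type metrics: one ESM-derived count per modelled
    # Java runtime error type (illustrative subset).
    error_types = ["NullPointer", "ArrayIndexOutOfBounds", "ClassCast"]
    return base + [float(esm_values.get(t, 0)) for t in error_types]
```

A vector built this way can be fed to any standard classifier, letting the model learn error-type proneness rather than a single fault/no-fault label.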

Director of Studies: Dr Emmanuel Ogunshile
Supervisor: Dr Mehmet Aydin

Andrew McCarthy (completed 2023)

Title: Methods for improving robustness against adversarial machine learning attacks.

Abstract: Machine learning systems can improve the efficiency of real-world tasks, including in the cyber security domain; however, these models are susceptible to adversarial attacks; indeed, an arms race exists between adversaries and defenders. The benefits of these systems have been accepted without fully considering their vulnerabilities, resulting in the deployment of vulnerable machine learning models in adversarial environments. For example, intrusion detection systems are relied upon to accurately discern between malicious and benign traffic but can be fooled into allowing malware onto a network. Robustness is the stability of performance in well-trained models facing adversarial examples. This thesis tackles the urgent problem of improving the robustness of machine learning models, enabling safer deployments in adversarial domains. The logical outputs of this research are countermeasures against adversarial examples. Original contributions to knowledge are: a survey of adversarial machine learning in the cyber security domain, a generalizable approach for feature vulnerability and robustness assessment, and a constraint-based method of generating transferable functionality-preserving adversarial examples in an intrusion detection domain. Novel defences against adversarial examples are presented: feature selection with recursive feature elimination, and hierarchical classification. Machine learning classifiers can be used in both visual and non-visual domains. Most research in adversarial machine learning considers the visual domain. A primary focus of this work is how adversarial attacks can be effectively used in non-visual domains, such as cyber security. For example, attackers may exploit weaknesses in an intrusion detection system classifier, enabling an intrusion to masquerade as benign traffic. Easily fooled systems are of limited use in critical areas such as cyber security.
In future, more sophisticated adversarial attacks could be used by ransomware and malware authors to evade detection by machine learning intrusion detection systems. Experiments in this thesis focus on intrusion detection case studies and use Python code and Python libraries: the CleverHans API and the Adversarial Robustness Toolbox to generate adversarial examples, and the HiClass library to facilitate hierarchical classification. An adversarial arms race is playing out in cyber security. Every time defences are improved, adversaries find new ways to breach networks. Currently, one of the most critical holes in defences is adversarial examples. This thesis examines the problem of robustness against adversarial examples for machine learning systems and contributes novel countermeasures, aiming to enable the deployment of machine learning in critical domains.
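The feature vulnerability assessment mentioned above can be illustrated with a simple sketch: perturb one feature at a time and measure how often the classifier's decision flips. This is a toy illustration rather than the thesis method; the function name, the additive perturbation scheme, and the toy model below are all assumptions.

```python
def feature_vulnerability(model, samples, feature_idx, delta=1.0):
    """Fraction of samples whose predicted label flips when one
    feature is perturbed by `delta` (higher = more fragile)."""
    flips = 0
    for x in samples:
        baseline = model(x)
        perturbed = list(x)
        perturbed[feature_idx] += delta
        if model(perturbed) != baseline:
            flips += 1
    return flips / len(samples)

# Toy usage: a threshold classifier that only looks at feature 0,
# so feature 0 is fragile and feature 1 is fully robust.
model = lambda x: int(x[0] > 0.5)
samples = [[0.1, 0.0], [0.4, 0.0], [0.9, 0.0]]
```

Ranking features by such a flip rate suggests which inputs a defender might drop or harden, which is the intuition behind defences such as recursive feature elimination.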

Director of Studies: Professor Phil Legg
Supervisors: Dr Essam Ghadafi (Newcastle University) and Dr Panagiotis Andriotis (University of Birmingham)
Industry partnership: Techmodal

Activities

Privacy-preserving machine learning through secure management of data’s lifecycle in distributed systems (REMINDER)

An EPSRC-funded project under the EU CHIST-ERA framework in collaboration with the Universidad de Murcia, SIEMENS Mobility Limited, and the Austrian Institute of Technology. The project will develop solutions for privacy-preserving machine learning through secure management of data’s lifecycle in general distributed systems, with a focus on IoT and resource-constrained networks, addressing two use cases: smart buildings and smart healthcare. (March 2024 – February 2027)

Knowledge Transfer Partnership with Service Robotics

An Innovate UK-funded project under the KTP framework working with Service Robotics Limited, in collaboration with the School of Computing and Creative Technologies and the School of Social Sciences. The project will develop secure advanced AI enhancements to the GenieConnect robotic healthcare assistant developed by Service Robotics Limited to support proactive and preventative care models. (2025 – 2026)

CyberWEST

An NCSC-funded project led by UWE Bristol in collaboration with the Universities of Bristol, Exeter and Plymouth, to design and develop a range of novel and engaging teaching materials for cyber security education, including information risk card games, game-based learning techniques, Raspberry Pi and micro:bit activities, digital forensics cases, and wireless penetration testing. The project facilitated a series of regional workshops led by each partner university to engage with over 100 school teachers across the South West region, as well as showcasing the teaching activities at the NCSC UK Education Ecosystem conference. (2023 – 2024)

Cyber sandpit: Linking cyber effects to mission objectives

A DSTL-funded project in collaboration with Trimetis and Frazer Nash Consultancy that explores training capabilities for military cyber protect teams, and how human and machine-based decision support systems can assist in analysing and acting to protect against hostile cyber activity. (January 2023 - October 2023)

Human-as-a-sensor: Crowdsourced cyber security

A DSTL-funded project in collaboration with Trimetis and Frazer Nash Consultancy that explores human reporting mechanisms for suspicious behaviour, and how human reporting can be processed and coupled with machine observable attributes, to provide proactive security for organisations. (January 2023 - October 2023)

Transforming Suspicious Activity Reports (SARs): UK's first technology line of defence

A UKRI Innovate UK-funded project, in collaboration with Synalogik Innovations Ltd, as well as Cardiff University and the University of Reading, to explore improving both the production and analysis of the SARs process, resulting in more efficient capability to investigate and respond to cyber crime and financial crime activity. (September 2022 - March 2024)

Measuring the suitability of AI technologies for autonomous resilience in cyber defence

A DSTL-funded project in collaboration with Trimetis, PA Consulting and QinetiQ. Within this project, we conducted a deep-dive investigation into current and future considerations of how AI should be utilised in military, security, and defensive operations, including incident response and training activities. This project served as part of the ongoing "Autonomous Resilience in Cyber Defence" programme that DSTL operates. (2022)

CAVForth

CAVForth, funded by both UKRI Innovate UK and the Government Centre for Connected Autonomous Vehicles (CCAV), in collaboration with the Bristol Robotics Laboratory, Fusion Processing, and Stagecoach, developed a fully autonomous bus service in Scotland. The CSRC Cyber Security team contributed towards the cyber security assessment of this project, to ensure that safe and secure mechanisms are in place for vehicle operations. (2020-2022)
