All Projects

eSAM — Exposure Sensing Animated Mannequin

📅 Sep 2022 - Feb 2024 👤 Lead Engineer

Human-sized robotic mannequin for chemical exposure testing with distributed sensors

Robotics · Embedded Systems · Sensor Networks · Control Systems · Industrial Automation

Designed and developed a human-sized robotic mannequin for chemical exposure testing and environmental sensing at the University of Hertfordshire.

System Overview

A fully articulated robotic platform that simulates human movements and postures while collecting comprehensive environmental and exposure data through distributed chemical sensors.

Key Components

  • Mechanical Design: Human-proportioned structure with multiple degrees of freedom
  • Motor Control: Precision control using Maxon motor drivers for realistic motion
  • Sensor Network: Distributed chemical sensors across the mannequin body
  • Communication: Modbus RTU protocol across Arduino and Raspberry Pi modules (see the polling sketch after this list)
  • Web Interface: Real-time monitoring and control dashboard for remote operation
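
As a concrete illustration of the Modbus RTU link listed above, the sketch below polls a handful of Arduino sensor nodes from the Raspberry Pi side of the bus. The minimalmodbus library, the slave addresses, and the register layout are assumptions made for this sketch; the actual eSAM register map and client code are project-specific.

```python
"""Minimal Modbus RTU polling sketch for a mannequin-style sensor bus.

Assumptions (not from the project): each Arduino sensor node is a Modbus
slave behind one RS-485 adapter, and its readings sit in four holding
registers starting at address 0.
"""
import time

import minimalmodbus

SERIAL_PORT = "/dev/ttyUSB0"   # hypothetical RS-485 adapter on the Raspberry Pi
SENSOR_NODE_IDS = [1, 2, 3]    # hypothetical slave addresses, one per body zone
SENSOR_REGISTER = 0            # hypothetical first holding register with sensor data
REGISTER_COUNT = 4             # hypothetical number of channels per node


def make_node(slave_id: int) -> minimalmodbus.Instrument:
    """Configure one Modbus RTU slave (an Arduino sensor node)."""
    node = minimalmodbus.Instrument(SERIAL_PORT, slave_id)
    node.serial.baudrate = 19200
    node.serial.timeout = 0.2
    return node


def poll_once(nodes: dict) -> dict:
    """Read every node's sensor registers; record an empty list for silent nodes."""
    readings = {}
    for slave_id, node in nodes.items():
        try:
            readings[slave_id] = node.read_registers(SENSOR_REGISTER, REGISTER_COUNT)
        except IOError:
            readings[slave_id] = []   # node missed its reply window
    return readings


if __name__ == "__main__":
    nodes = {sid: make_node(sid) for sid in SENSOR_NODE_IDS}
    while True:
        print(poll_once(nodes))
        time.sleep(1.0)   # 1 Hz poll rate, tuned to the bus budget
```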

Technologies

Python, Arduino, Raspberry Pi, Modbus RTU, Maxon Motors, Control Systems, Distributed Sensors, Web Interface

Applications

  • Personal protective equipment (PPE) testing
  • Workplace exposure assessment
  • Chemical safety research
  • Environmental monitoring studies
  • Ergonomics and safety protocol development

Technical Achievements

  • Seamless integration of heterogeneous embedded systems
  • Reliable distributed sensor data collection
  • Intuitive web-based control interface
  • Repeatable and programmable motion sequences (sketched below)
  • Real-time data visualization and analysis
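
To make the programmable motion sequences concrete, below is a minimal sketch of how posture scripts could be represented and replayed. The joint names, the PostureStep structure, and the send_joint_target placeholder are illustrative only; the real system commands Maxon drivers through their own interface.

```python
"""Illustrative posture-sequence player for a mannequin-style robot.

The joint names, angles, and send_joint_target() placeholder are
assumptions for this sketch, not the eSAM implementation.
"""
import time
from dataclasses import dataclass


@dataclass
class PostureStep:
    joint_targets: dict   # joint name -> target angle in degrees
    hold_seconds: float   # how long to hold the posture


# A hypothetical "raise arm, reach forward, return" sequence.
SEQUENCE = [
    PostureStep({"shoulder_pitch": 45.0, "elbow": 10.0}, hold_seconds=2.0),
    PostureStep({"shoulder_pitch": 90.0, "elbow": 30.0}, hold_seconds=3.0),
    PostureStep({"shoulder_pitch": 0.0, "elbow": 0.0}, hold_seconds=1.0),
]


def send_joint_target(joint: str, angle_deg: float) -> None:
    """Placeholder for the motor-driver command (e.g. a position setpoint)."""
    print(f"-> {joint}: {angle_deg:.1f} deg")


def run_sequence(steps: list) -> None:
    """Replay a posture script step by step, holding each pose."""
    for step in steps:
        for joint, angle in step.joint_targets.items():
            send_joint_target(joint, angle)
        time.sleep(step.hold_seconds)


if __name__ == "__main__":
    run_sequence(SEQUENCE)
```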

EEG & Facial Emotion Data Collection Platform

📅 Feb-Oct 2025 👤 Lead Developer

Web-based platform for synchronized multimodal emotion data collection with EEG and facial recognition

EEG · Emotion Recognition · Vision Transformer · Web Platform · React · Node.js · MUSE 2

Developed a comprehensive web-based platform for synchronized multimodal emotion data collection, integrating EEG (MUSE 2), facial emotion recognition, and human feedback for large-scale emotion studies.

Key Features

  • Real-Time EEG Streaming: Integration with MUSE 2 headset for live brainwave data collection (see the sketch after this list)
  • Facial Emotion Recognition: Vision Transformer (ViT) based emotion inference from video
  • Synchronized Data Collection: Temporal alignment of EEG, facial expressions, and self-reported emotions
  • Large-Scale Study Support: Designed to support studies with 250+ participants examining emotional responses to video content
  • Multi-Dataset Training: ViT classifiers fine-tuned on FER2013, RAF-DB, AffectNet, and CK+ datasets
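
As a sketch of the real-time EEG path: MUSE 2 data is commonly bridged to Python over Lab Streaming Layer (for example via muselsl), after which pylsl can pull timestamped samples for later alignment with video frames and self-report events. The assumption that an LSL EEG stream is already running is made for this sketch; the platform's actual ingestion pipeline may differ.

```python
"""Pull timestamped EEG samples from a MUSE 2 headset exposed over LSL.

Assumes an LSL stream of type "EEG" is already being published
(e.g. by running `muselsl stream` in another terminal); this is a
sketch of the ingestion step, not the platform's exact pipeline.
"""
from pylsl import StreamInlet, resolve_byprop

# Find the EEG stream (MUSE 2 publishes TP9, AF7, AF8, TP10 plus an AUX channel).
streams = resolve_byprop("type", "EEG", timeout=10.0)
if not streams:
    raise RuntimeError("No LSL EEG stream found - is `muselsl stream` running?")

inlet = StreamInlet(streams[0], max_chunklen=12)

while True:
    # `timestamps` are LSL clock times, used later to align EEG with
    # video frames and self-reported emotion labels.
    chunk, timestamps = inlet.pull_chunk(timeout=1.0)
    for sample, ts in zip(chunk, timestamps):
        print(ts, sample)
```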

Technologies

Python, JavaScript, React, Node.js, PyTorch, MUSE 2 SDK, Vision Transformer (ViT), FER2013, RAF-DB, AffectNet, CK+

Technical Highlights

  • Web-based architecture for remote participation
  • Real-time data streaming and processing
  • Robust emotion classification using state-of-the-art ViT models (see the sketch below)
  • Comprehensive data annotation and quality control pipeline
  • Secure storage and privacy-preserving data handling
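
The ViT-based emotion classifier can be sketched as a standard fine-tuning setup: load a pretrained ViT backbone, swap in an emotion head, and run inference on face crops. The timm library, the 7-class label set, and the preprocessing below are assumptions for illustration; the platform's actual models fine-tuned on FER2013, RAF-DB, AffectNet, and CK+ are not reproduced here.

```python
"""Sketch of ViT-based facial emotion inference.

Uses timm's pretrained ViT-B/16 backbone with an assumed 7-class head;
the platform's fine-tuned weights and exact preprocessing are not shown.
"""
import timm
import torch
from PIL import Image
from timm.data import create_transform, resolve_data_config

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# Pretrained backbone with a fresh 7-way classification head.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=len(EMOTIONS))
model.eval()

# Build the matching eval transform (resize, crop, normalise) from the model config.
config = resolve_data_config({}, model=model)
transform = create_transform(**config)


@torch.no_grad()
def predict_emotion(face_crop: Image.Image):
    """Return the top emotion label and its softmax probability for one face crop."""
    x = transform(face_crop).unsqueeze(0)          # [1, 3, 224, 224]
    probs = model(x).softmax(dim=-1).squeeze(0)    # [7]
    idx = int(probs.argmax())
    return EMOTIONS[idx], float(probs[idx])


if __name__ == "__main__":
    face = Image.open("face_crop.jpg").convert("RGB")   # hypothetical face crop from video
    print(predict_emotion(face))
```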

Applications

The platform enables research in affective computing, emotion AI, and mental health monitoring by providing high-quality multimodal emotion datasets.

Data Study Group (DSG) — Advanced Manufacturing

📅 Dec 2022 👤 Facilitator

Collaborative data science research sprint addressing industrial manufacturing challenges

Data Science · Machine Learning · Data Augmentation · Synthetic Data · PCA · SVD

Facilitated a collaborative research sprint at the Alan Turing Institute, working with multidisciplinary teams to tackle industrial challenges in advanced manufacturing for the AMRC (Advanced Manufacturing Research Centre) using data-driven approaches.

Project Objectives

Addressing challenges in manufacturing data analysis, with a focus on:

  • Low-frequency data problems
  • Sparse dataset analysis
  • Data augmentation strategies
  • Predictive modeling for quality control

Contributions

  • Data Augmentation: Applied advanced techniques to improve analysis robustness in sparse datasets
  • Synthetic Data Generation: Developed methods for creating realistic synthetic samples
  • Dimensionality Reduction: Implemented PCA and SVD for feature extraction (illustrated below)
  • Team Facilitation: Coordinated multidisciplinary collaboration and knowledge sharing
  • Report Co-Authoring: Contributed to published outcomes and recommendations
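
The dimensionality-reduction work can be illustrated with PCA computed from the SVD of a centred sensor matrix, as in the short sketch below; the matrix shape and the 95% variance threshold are invented for the example and do not come from the AMRC challenge data.

```python
"""PCA via SVD on a small sensor matrix (illustrative shapes only)."""
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))   # 200 measurement cycles x 50 sensor channels (made up)

# Centre the data, then take the thin SVD: X_c = U @ diag(S) @ Vt.
X_c = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_c, full_matrices=False)

# Explained-variance ratio per principal component.
explained = (S ** 2) / np.sum(S ** 2)
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1   # components for 95% variance

# Project onto the first k principal axes (rows of Vt are the components).
scores = X_c @ Vt[:k].T   # shape (200, k): the reduced feature set
print(f"kept {k} of {len(S)} components, scores shape = {scores.shape}")
```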

Technologies

Python, Data Analysis, PCA, SVD, Data Augmentation, Synthetic Data Generation

Impact

The project provided actionable insights for AMRC partners and demonstrated effective approaches for dealing with challenging industrial datasets common in manufacturing environments.

Project Link: Alan Turing Institute - Data Study Groups

Key Takeaways

  • Innovative solutions for sparse and low-frequency data challenges
  • Cross-domain collaboration between academia and industry
  • Practical application of advanced data science techniques
  • Rapid prototyping and validation methodologies

Hospital@Home / PRIME Study

📅 2024-Present 👤 Lead AI Researcher

AI-driven virtual ward system for ambient assistive care with multimodal patient monitoring

Healthcare AI · Multi-View HAR · HRI · Sensor Fusion · Ambient Intelligence · Deep Learning

Developing an AI-driven multimodal pipeline for ambient assistive technology in virtual wards supporting post-surgery and heart failure recovery at the University of Hertfordshire.

Key Features

  • Multi-View Human Activity Recognition: Designing advanced HAR systems using multiple camera viewpoints for robust patient monitoring (see the late-fusion sketch after this list)
  • Sensor Fusion Framework: Integrating multiple sensor modalities for comprehensive health and activity assessment
  • Human-Robot Interaction: Creating HRI scenarios to enhance patient engagement and provide personalized assistance
  • Adaptive Assistance: Real-time monitoring and intervention system adapting to patient needs
  • Healthcare Integration: Bridging healthcare and social care systems
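
One common way to realise multi-view HAR is late fusion: run a per-view classifier on each camera and average the resulting class scores. The PyTorch sketch below shows that pattern with a placeholder backbone; the PRIME pipeline's actual models, number of views, and activity classes are not specified here.

```python
"""Late-fusion sketch for multi-view activity recognition.

The tiny per-view encoder, 3 camera views, and 8 activity classes are
placeholders; the project's actual backbones and label set differ.
"""
import torch
import torch.nn as nn

NUM_VIEWS, NUM_CLASSES = 3, 8


class PerViewEncoder(nn.Module):
    """Stand-in for a real video/skeleton backbone producing per-view class logits."""

    def __init__(self, feat_dim: int = 512, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, num_classes)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:   # feats: [B, feat_dim]
        return self.head(feats)                               # [B, num_classes]


class LateFusionHAR(nn.Module):
    """Average per-view softmax scores; a learned view weighting is a natural extension."""

    def __init__(self):
        super().__init__()
        self.encoders = nn.ModuleList(PerViewEncoder() for _ in range(NUM_VIEWS))

    def forward(self, views: list) -> torch.Tensor:
        scores = [enc(v).softmax(dim=-1) for enc, v in zip(self.encoders, views)]
        return torch.stack(scores).mean(dim=0)                # [B, num_classes]


if __name__ == "__main__":
    model = LateFusionHAR()
    dummy_views = [torch.randn(4, 512) for _ in range(NUM_VIEWS)]   # 4 clips, 3 views
    print(model(dummy_views).shape)                                 # torch.Size([4, 8])
```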

Collaboration

  • Princess Alexandra Hospital
  • European research partners
  • Funded by Dinwoodie Charitable Company Research Grant (2025-2027)

Technologies

Python, PyTorch, ROS2, Deep Learning, Sensor Fusion, Multi-View HAR, HRI, Ambient Intelligence, Healthcare AI

Impact

The system enables patients to recover at home with continuous AI-powered monitoring and robotic assistance, reducing hospital stays while maintaining high-quality care standards.

More info: Robotics Research Group, UH

SWAG — Soft Wearable Assistive Garment

📅 2024-Present 👤 Lead Researcher

AI-driven intent-detection system using multi-sensor biomechanical data for wearable robotics

Deep Learning · Sensor Fusion · Meta-Learning · Transformers · EMG · IMU · Wearable Robotics · Edge AI

Developing AI-driven intent-detection systems using multi-sensor biomechanical data (EMG, IMU, kinetics, kinematics) with meta-learning and Transformer architectures at the University of Hertfordshire.

Key Contributions

  • Deep Learning Framework: Designed and implemented a sensor fusion framework for real-time motion recognition and adaptive human-robot interaction
  • Meta-Learning: Applied meta-learning techniques for rapid adaptation to new users and movement patterns
  • Transformer Architectures: Developed attention-based models for temporal pattern recognition in biosignals (sketched after this list)
  • Dataset Creation: Led data collection campaigns and created benchmark datasets for wearable sensor research
  • European Collaboration: Working with partners across Europe on system integration and evaluation
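
A minimal version of the Transformer-based intent detector could look like the sketch below: windows of fused EMG and IMU channels pass through a linear embedding, a learned positional encoding, a TransformerEncoder, and a classification head. The channel counts, window length, and number of intent classes are placeholders rather than SWAG's actual configuration.

```python
"""Sketch of a Transformer encoder over fused EMG + IMU windows.

8 EMG + 6 IMU channels, 200-sample windows, and 5 intent classes are
illustrative placeholders, not the project's real sensor layout.
"""
import torch
import torch.nn as nn

EMG_CH, IMU_CH, WINDOW, N_INTENTS = 8, 6, 200, 5


class IntentTransformer(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(EMG_CH + IMU_CH, d_model)           # per-timestep projection
        self.pos = nn.Parameter(torch.zeros(1, WINDOW, d_model))   # learned positional encoding
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, N_INTENTS)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch, WINDOW, EMG_CH + IMU_CH] -- one fused biosignal window
        h = self.encoder(self.embed(x) + self.pos)   # [batch, WINDOW, d_model]
        return self.head(h.mean(dim=1))              # pool over time -> [batch, N_INTENTS]


if __name__ == "__main__":
    model = IntentTransformer()
    window = torch.randn(2, WINDOW, EMG_CH + IMU_CH)   # two example windows
    print(model(window).shape)                         # torch.Size([2, 5])
```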

Technologies

Python, PyTorch, CUDA, ROS2, HRNet, YOLOv7, Meta-Learning, Transformers, Edge AI

Impact

The project aims to create assistive wearable garments that can predict user intent and provide appropriate assistance for people with mobility impairments, enabling more natural and intuitive human-robot collaboration.

More info: SWAG Project Website | Robotics Research Group, UH

Multi-View Human Activity Recognition (RHM-HAR Series)

📅 2020-Present 👤 Principal Investigator

Comprehensive multi-view HAR research with benchmark datasets and advanced AI architectures

Human Activity Recognition · Multi-View Vision · Transformer · Agentic AI · Computer Vision · PyTorch

Leading ongoing research in human activity recognition via multi-view camera systems, initiated during PhD and extended through postdoctoral collaborations at the University of Hertfordshire.

Dataset Contributions

RHM-HAR-SK: Multi-view skeleton-based activity recognition dataset with:

  • Multiple synchronized camera viewpoints
  • 3D skeleton annotations
  • Diverse activities for assisted living scenarios
  • Benchmark evaluation protocols

RHM-HAR-1: RGB multi-view activity recognition dataset featuring:

  • High-resolution video from multiple angles
  • Natural home environment recordings
  • Comprehensive activity taxonomy
  • Public release for research community

Technical Innovations

  • Advanced AI Architectures: Integrating Transformer-based models for temporal modeling
  • Agentic AI Concepts: Exploring memory-driven and goal-oriented recognition systems
  • Multi-View Fusion: Novel approaches to combining information from multiple camera perspectives
  • Edge Deployment: Optimizing models for real-time performance on resource-constrained devices
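
For the edge-deployment point, one common route is dynamic quantization plus a TorchScript export so a recognition model can run on a resource-constrained device without the full training stack. The sketch below applies that recipe to a placeholder classifier; it is not the RHM-HAR deployment pipeline.

```python
"""Dynamic quantization + TorchScript export of a placeholder HAR classifier.

The tiny MLP stands in for a real recognition model; the actual
RHM-HAR deployment recipe is not reproduced here.
"""
import torch
import torch.nn as nn

# Placeholder classifier: pooled skeleton/video features -> 10 activity classes.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10)).eval()

# Dynamic quantization stores Linear weights as int8 for smaller, faster CPU inference.
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Trace and save so the model can be loaded on the edge device via TorchScript.
example = torch.randn(1, 256)
traced = torch.jit.trace(quantized, example)
traced.save("har_edge_model.pt")

with torch.no_grad():
    print(traced(example).softmax(dim=-1).shape)   # torch.Size([1, 10])
```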

Technologies

PyTorch, MMPOSE, YOLOv7, GPU Cluster, Transformer, Agentic AI, Multi-View HAR

Publications

Multiple papers published on dataset design, multi-view recognition, and intelligent system architectures, contributing to the broader HAR research community.

Impact

The RHM-HAR series has become a valuable resource for researchers working on ambient assisted living, elderly care monitoring, and smart home applications.