LONDE Tristan

👋 Hello!

I'm an Artificial Intelligence engineering student at ENIB (École Nationale d'Ingénieurs de Brest), passionate about signal processing, computer vision, and building intelligent systems.

In my free time, I design and develop projects in fields that interest me (AI, data science, software engineering, …).

For example, I created a fatigue detector (FatiguEye) and an AI-powered application that recognizes emotions in human speech (EmotionAI-Voice). I am currently developing a local AI assistant (Numa) and, with Florian Pasco, an AI reverse-engineering tool for printed circuit boards (BoardMapper).

On my GitHub you will find projects built with a variety of technologies, mostly Data Science and Machine Learning but not only, as I am curious by nature and always eager to learn new technologies and acquire new skills.

๐ŸŒ Interests

  • 💻 AI & Machine Learning: Real-time models, voice AI, local-first systems
  • 🎤 Speech Synthesis: Custom voice assistants, voice-to-voice pipelines
  • 👁️ Computer Vision: Fatigue detection, facial analysis, eye tracking
  • 🛡️ Cybersecurity: Deepfake detection, audio anti-spoofing
  • And a lot more!

EmotionAI-Voice

An AI-powered application for detecting human emotions

EmotionAI Voice is an open-source deep learning project that classifies vocal emotions using raw .wav audio.
It's designed for applications in mental health monitoring, UX analysis, and intelligent speech interfaces.

🔬 The model is trained from scratch using spectrogram-based audio features, and aims to recognize 8 core emotions.


🎯 Features

  • 🧠 Emotion recognition: neutral, calm, happy, sad, angry, fearful, disgust, surprised
  • 🎧 Accepts .wav audio inputs (from the RAVDESS dataset)
  • 📊 CNN and CNN+GRU models implemented in PyTorch
  • 🔍 Real-time evaluation with confusion matrix and accuracy tracking
  • 🛠️ Fully open-source and customizable (no pre-trained models)
  • 🧪 Includes SpecAugment for data augmentation (frequency/time masking)

📚 Dataset – RAVDESS

We use the RAVDESS dataset, which includes:

  • 🎭 24 professional actors (balanced male/female)
  • 🎙️ 1440 .wav files (16-bit, 48 kHz)
  • 8 labeled emotions:
    neutral, calm, happy, sad, angry, fearful, disgust, surprised

Each .wav file is preprocessed into a Mel spectrogram and stored in .npy format.
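The wav-to-mel step can be sketched in plain NumPy (the actual pipeline may rely on a library such as librosa; the function names, FFT/hop sizes, and 64-mel resolution below are illustrative assumptions, not the project's settings):

```python
import numpy as np

def hz_to_mel(f):
    # HTK-style mel scale
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def log_mel_spectrogram(wave, sr=48000, n_fft=1024, hop=512, n_mels=64):
    # Frame the signal, apply a Hann window, take the power spectrum per frame
    n_frames = 1 + max(0, len(wave) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([wave[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    mel = power @ mel_filterbank(n_mels, n_fft, sr).T
    return np.log(mel + 1e-6)  # shape: (n_frames, n_mels)

# Hypothetical preprocessing step for one clip:
# wave, sr = load_wav("some_clip.wav")   # e.g. via the soundfile package
# np.save("some_clip.npy", log_mel_spectrogram(wave, sr))
```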


🧠 Model Architectures

Two model architectures were implemented:

✅ CNN (Best Performance)

  • 3x Conv1D + ReLU + MaxPool
  • Fully connected layers
  • Dropout regularization (adjustable)

๐Ÿ” CNN + GRU

  • CNN front-end for spatial encoding
  • GRU (recurrent layers) to capture temporal dynamics
  • Lower accuracy than CNN-only model

🧪 SpecAugment: Data Augmentation

To improve generalization, we implemented a SpecAugmentTransform, which applies:

  • 🕒 Time masking: hides random time intervals
  • 📡 Frequency masking: hides random mel frequency bands
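The two masking operations can be sketched in NumPy as follows (the function name and mask widths are illustrative; the project's SpecAugmentTransform may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(0)

def spec_augment(spec, max_t=20, max_f=8):
    """Apply one time mask and one frequency mask to a (time, mel) spectrogram."""
    out = spec.copy()
    n_t, n_f = out.shape
    # Time masking: zero out a random span of consecutive frames
    t = rng.integers(0, max_t + 1)
    t0 = rng.integers(0, max(n_t - t, 1))
    out[t0:t0 + t, :] = 0.0
    # Frequency masking: zero out a random band of mel channels
    f = rng.integers(0, max_f + 1)
    f0 = rng.integers(0, max(n_f - f, 1))
    out[:, f0:f0 + f] = 0.0
    return out

augmented = spec_augment(np.ones((100, 64)))
```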

📈 Training Results

  • Best Validation Accuracy: ~49.6%
  • Training set: Actors 1–20
  • Validation set: Actors 21–24
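The speaker-independent split follows directly from the RAVDESS filename convention (seven two-digit fields, the third encoding the emotion and the seventh the actor ID); the helper names below are illustrative:

```python
# RAVDESS filenames encode metadata as seven two-digit fields, e.g.
# "03-01-06-01-02-01-12.wav": the 3rd field is the emotion code,
# the 7th the actor ID.
EMOTIONS = ["neutral", "calm", "happy", "sad",
            "angry", "fearful", "disgust", "surprised"]

def parse_ravdess(filename):
    fields = filename.removesuffix(".wav").split("-")
    return EMOTIONS[int(fields[2]) - 1], int(fields[6])

def split_by_actor(filenames, last_train_actor=20):
    # Speaker-independent split: actors 1-20 for training, 21-24 for validation
    train, val = [], []
    for name in filenames:
        _, actor = parse_ravdess(name)
        (train if actor <= last_train_actor else val).append(name)
    return train, val

print(parse_ravdess("03-01-06-01-02-01-12.wav"))  # ('fearful', 12)
```

Splitting by actor rather than at random keeps every validation voice unseen during training, which is what makes the ~49.6% figure a speaker-independent estimate.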

Confusion Matrix Example:

(Confusion matrix image)

๐Ÿ” Key Observations:

  • Surprised, calm, and disgust are the most accurately predicted emotions.
  • Neutral, happy, and sad tend to be confused with each other, which is common given their subtle acoustic differences.
  • The model struggles with fearful and angry in some cases, suggesting those emotions share overlapping vocal characteristics in this dataset.
  • Happy and fearful are also often misclassified due to variability in expression intensity across actors.

📈 Interpretation

While the model captures general emotion cues, it suffers from class overlap and limited generalization. The accuracy remains significantly above random (12.5% for 8 classes), but there is still room for improvement.


🚀 Getting Started

1. Install dependencies

```bash
pip install -r requirements.txt
```

2. Download the dataset from Kaggle

Follow the instructions in the README.md located in the data folder.

3. Train the model

```bash
python src/train.py
```

4. Evaluate performance with a confusion matrix

```bash
python src/confusion_matrix.py
```

FatiguEye

Detection of fatigue using a webcam

A smart computer vision system that detects signs of fatigue, eye strain, and microsleep using a standard webcam.


🎯 Purpose

FatiguEye is a real-time fatigue detection system based on eye tracking and facial landmark analysis.
It helps identify early signs of drowsiness by measuring:

  • 👁️ Eye Aspect Ratio (EAR)
  • 🔁 Blink frequency
  • ⏱️ Prolonged eyelid closure
  • ⚠️ Microsleep events

Ideal for driver monitoring, industrial safety, or ergonomic fatigue prevention.


🧠 How It Works

FatiguEye uses MediaPipe Face Mesh to extract eye landmarks, and computes the EAR (Eye Aspect Ratio) on each video frame.

📡 Processing pipeline:

  1. 🎥 Webcam feed is captured in real time
  2. 🧠 Facial landmarks (eyes) are detected with MediaPipe
  3. 📏 EAR is calculated per eye
  4. 🧮 Blink count and eye closure duration are analyzed
  5. 🔔 Fatigue alerts are raised (visual + audio)
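Step 3 of the pipeline, the EAR computation, can be sketched with NumPy; the 0.21 closure threshold below is an illustrative value, not necessarily the one FatiguEye uses:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks p1..p6 (p1/p4 = horizontal corners,
    p2/p3 = upper lid, p6/p5 = lower lid), following Soukupova & Cech (2016)."""
    p1, p2, p3, p4, p5, p6 = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# Illustrative closure threshold; in practice tuned per camera and user
EAR_THRESHOLD = 0.21

open_eye   = [(0, 0), (2, 3), (4, 3), (6, 0), (4, -3), (2, -3)]
closed_eye = [(0, 0), (2, 0.3), (4, 0.3), (6, 0), (4, -0.3), (2, -0.3)]
assert eye_aspect_ratio(open_eye) > EAR_THRESHOLD > eye_aspect_ratio(closed_eye)
```

The EAR drops sharply when the eyelids close, so counting frames below the threshold yields blink frequency and prolonged-closure duration.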

🚀 Demo Preview

FatiguEye demo


💻 Technologies Used

| Tech | Description |
| --- | --- |
| Python | Core language |
| OpenCV | Webcam video processing + overlays |
| MediaPipe | Face mesh & eye landmark detection |
| NumPy | EAR computation |
| Streamlit | Live web dashboard |
| winsound | Audio alert (Windows only) |

📦 Installation

```bash
git clone https://github.com/Tirovo/fatigueye.git
cd fatigueye
python -m venv venv
source venv/bin/activate  # Or venv\Scripts\activate on Windows
```

BoardMapper

๐Ÿ› ๏ธ PCB placement map generator

BoardMapper is an open-source tool that automatically generates annotated PCB placement maps. It labels component references (e.g. U1, R1, C1) directly on the circuit image, facilitating component identification for reverse engineering purposes.

🎯 Purpose

  • 🤖 Automation: Eliminates the need for manual placement annotation on PCB layouts.
  • ⏱️ Efficiency: Saves time for engineers and makers working on PCB assembly and debugging.
  • 🔍 Clarity: Provides a clear visual reference for debugging, testing, and manufacturing.
  • 💻 Cross-Platform: Works on Windows, Linux, and macOS systems.

๐Ÿ“ Annotation

Position Original Annotated
Top
Bottom

📋 Requirements

  • Python: Version 3.6 or higher
  • Required Libraries:
    • opencv-python (for image processing)
    • lxml (for XML parsing)

🚀 Installation Instructions

๐Ÿ› ๏ธ Setup

  1. Clone the repository or download the project to your local machine.

  2. Labeling the PCB:
    • Step 1: Take a photo of both the top and bottom layers of the chosen PCB.
    • Step 2: Place the top.png and bottom.png images into the input folder.
    • Step 3: Install the latest version of LabelImg.
    • Step 4: Open top.png in LabelImg and draw bounding boxes around each component. Label each component according to its type:
      • R: Resistor
      • C: Capacitor
      • L: Inductor
      • F: Fuse
      • POT: Potentiometer
      • D: Diode
      • LED: LED
      • Q: Transistor (BJT, MOSFET)
      • U: Integrated Circuit (IC)
      • J: Connector
      • K: Relay
      • SW: Switch
      • Y: Quartz / Resonator
      • SP: Speaker
      • ANT: Antenna

      LabelImg Shortcuts:

      • โœ๏ธ W: Draw a new rectangular bounding box (RectBox)
      • โŒ D: Delete the last drawn bounding box
      • ๐Ÿ’พ Ctrl + S: Save the annotation as an XML file
      • โช Ctrl + Z: Undo the last action
      • ๐Ÿ“‹ Ctrl + C: Copy a bounding box
      • ๐Ÿ“ Ctrl + V: Paste a copied bounding box
      • ๐Ÿ” Ctrl + A: Select all bounding boxes
      • ๐Ÿ”„ Ctrl + R: Rotate the image (for better labeling)
      • ๐Ÿšซ Esc: Cancel the current operation or close a dialog box
    • Step 5: After labeling the top.png, save the annotation as top.xml.
    • Step 6: Repeat the labeling process for the bottom.png and save it as bottom.xml.
    • Step 7: Place both top.xml and bottom.xml into the input folder.
  3. Running the Tool:
    • Windows: Double-click on setup_and_run.bat to automatically run the script. The tool will read the XML annotations, draw bounding boxes on the images, and save the annotated images.
    • Linux/macOS: You can run the script from the terminal:
      chmod +x script.sh
      ./script.sh
      
  4. 📂 Output:
    • After the script has executed, navigate to the output folder to find the resulting annotated images:
      • top_annotated.png
      • bottom_annotated.png
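Under the hood, the tool reads the LabelImg annotations (Pascal VOC XML) and draws each labeled box onto the image. A minimal sketch of the parsing step, using the stdlib ElementTree for illustration (BoardMapper itself lists lxml as its XML dependency):

```python
import xml.etree.ElementTree as ET

def load_boxes(xml_text):
    """Parse a LabelImg (Pascal VOC) annotation into (label, xmin, ymin, xmax, ymax) tuples."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        label = obj.findtext("name")
        bb = obj.find("bndbox")
        boxes.append((label,
                      int(bb.findtext("xmin")), int(bb.findtext("ymin")),
                      int(bb.findtext("xmax")), int(bb.findtext("ymax"))))
    return boxes

# The drawing step would then use OpenCV, roughly:
#   cv2.rectangle(img, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
#   cv2.putText(img, label, (xmin, ymin - 5), cv2.FONT_HERSHEY_SIMPLEX,
#               0.5, (0, 255, 0), 1)

SAMPLE = """<annotation>
  <object><name>U1</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>60</xmax><ymax>80</ymax></bndbox>
  </object>
</annotation>"""
print(load_boxes(SAMPLE))  # [('U1', 10, 20, 60, 80)]
```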

๐Ÿค Contributions

If you'd like to contribute to the project, please follow these steps:

  1. Fork the repository
  2. Create a feature branch
  3. Commit your changes
  4. Push to the branch
  5. Open a pull request

We welcome any contributions to improve BoardMapper! 🎉

VATN-WaterGame

💧 A strategic game on water management!

VATN (from the Old Norse word for "water") is a game designed to raise awareness about water management. Players must make critical daily decisions to address major water-related issues in their country, influencing key parameters that determine the nation's survival.

🎯 Purpose

  • 💧 Water Management Awareness: Educate players on the importance of sustainable water policies.
  • 🎮 Engaging Decision-Making: Every day presents a new challenge that affects the nation's status.
  • 🏆 Survival & Strategy: Keep your country alive as long as possible by maintaining stability.
  • 📊 Progress Tracking: Visualize the evolution of key parameters through in-game graphs.

๐Ÿ“ Features

๐Ÿท๏ธ Feature ๐Ÿ” Description
๐ŸŒŽ Game Type Strategic Decision-Making Simulation
๐Ÿ“… Daily Choices Players make decisions each day affecting country parameters
๐Ÿ“‰ Dynamic Statistics Key indicators fluctuate based on player actions
๐Ÿ’€ Game Over The country collapses if the population reaches 0
๐Ÿ“‚ Save & Load Resume previous games using saved files
๐Ÿ… Rankings Compare results with previous local games
๐Ÿ“Š Graphical Summary Track country status evolution over time
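The daily-decision mechanic can be illustrated with a minimal state-update sketch (the parameter names and effect values below are invented for illustration, not taken from the game's actual data):

```python
# Hypothetical daily decision loop: each choice nudges country parameters,
# and the game ends when the population hits 0.
DECISIONS = {
    "build_dam":  {"water": +10, "economy": -5},
    "ration":     {"water": +5, "population": -2},
    "do_nothing": {"water": -5},
}

def play_day(state, choice):
    """Apply one day's decision; return False when the country collapses."""
    for param, delta in DECISIONS[choice].items():
        state[param] += delta
    state["population"] = max(state["population"], 0)
    return state["population"] > 0

state = {"population": 100, "water": 50, "economy": 50}
alive = play_day(state, "ration")  # population 98, water 55 -> still alive
```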

📺 Gameplay Overview

🎮 Main Menu · 📊 In-Game Statistics · 🏆 Endgame Rankings (screenshots)

AntiVuvuzelaFilter

🎵 Noise filter for clear audio 🎙️

Anti-Vuvuzela Filter is an open-source project dedicated to second-order analog filters and beyond. This project was initially developed to design an "anti-vuvuzela" filter, aiming to attenuate the distinctive and persistent sound of vuvuzelas while preserving the clarity of commentators' voices during the 2010 FIFA World Cup.

🎯 Purpose

  • 🔇 Targeted Noise Reduction: Specifically designed to attenuate vuvuzela noise while maintaining the intelligibility of speech.
  • 🎚 Second-Order Analog Filtering: Utilizing advanced filtering techniques for efficient noise cancellation.
  • 🛠️ Open-source and Customizable: Modify and adapt the design for other audio filtering applications.

๐Ÿ“ Features

๐Ÿท๏ธ Feature ๐Ÿ” Description
๐ŸŽผ Filter Type Second-order analog filter
๐ŸŽฏ Target Frequency 233 Hz (typical vuvuzela frequency)
๐ŸŽ™ Voice Preservation Maintains speech clarity
๐Ÿ”ง Components Resistors, capacitors, and operational amplifiers
๐Ÿ–ฅ๏ธ Simulation Tools Jupyter Notebook, LTSpice
๐Ÿ›  Real-world Testing Assembled and tested in real conditions
๐Ÿ”Œ Input Analog audio signal
๐Ÿ”Š Output Cleaned audio signal with reduced vuvuzela noise
๐ŸŒ Use Cases Audio signal processing, speech enhancement, noise reduction

๐Ÿ“ Simulation & Testing

๐Ÿ› ๏ธ LTSpice Circuit ๐Ÿ“œ Simulation

MotorControlShield

🔄 Arduino shield for single DC motor control

The Motor Control Shield is an open-source project designed for controlling DC motors. It comes in the form of an Arduino shield mounted on an STM32 Nucleo board. The shield enables motor control via an NMOS transistor, current measurement with a shunt resistor, and rotation tracking using data from an incremental encoder.

🎯 Purpose

  • 🔄 Motor Control: Provides precise control over DC motors.
  • 📉 Current Measurement: Monitors the current consumed by the motor.
  • 🔄 Rotation Tracking: Uses an incremental encoder to track motor rotation.
  • 🛠️ Open-source & Customizable: Modifiable and adaptable for various projects.

๐Ÿ“ Features

๐Ÿท๏ธ Feature ๐Ÿ” Description
๐Ÿ”„ Motor Control Uses an NMOS transistor to control motor speed and direction
๐Ÿ“‰ Current Measurement Shunt resistor for measuring the current consumed by the motor
๐Ÿ”„ Rotation Tracking Incremental encoder to track motor position and rotation speed
๐Ÿ”˜ Compatibility Arduino shield compatible with STM32 Nucleo boards
๐Ÿ–ฅ๏ธ PCB Design Open-source and customizable
๐ŸŒ Use Cases robots, embedded systems, and motor control applications

๐Ÿ“ PCB Design Preview

📜 Functional diagram 📜 Schematic 🖥️ PCB Layout 🏗️ 3D