ML-Enhanced Vision for Detection, Identification, and High-Speed Path Finding - BC-988

Project type: Innovation
Desired discipline(s): Engineering - computer / electrical, Engineering, Computer science, Mathematical sciences, Mathematics
Company: Genist Systems
Project duration: Flexible
Desired start date: As soon as possible
Language required: English
Location(s): Vancouver, BC, Canada
Number of positions: 1
Desired education level: College, Undergraduate/Bachelor's, Master's, Doctorate, Postdoctoral research, Recent graduate
Open to applications from candidates registered at an institution outside Canada: No

About the company:

Genist Systems is a Canadian technology company developing high-speed, all-weather VTOL (Vertical Take-Off and Landing) drones for emergency response.
Our platform features a high-resolution camera cluster engineered for the long-distance tracking and identification of people and vehicles. We integrate advanced flight pathing with robust flight control systems to deliver critical, life-saving capabilities.
Our core focus is enhanced computer vision and sensor fusion. This enables low-cost, high-speed autonomous flight through adaptive pathing, ensuring reliable operation in complex and adverse environments.

Please describe the project:

This project develops an integrated multi-sensor perception pipeline for high-speed search-and-rescue drones, covering long-range object detection, re-identification, and autonomous path planning toward people or vehicles matching a defined search profile.
1. Dataset Development
Collect annotated aerial data of moving cars and people using RGB, thermal, low-light, and LiDAR/radar sensors. All video and sensor streams are normalized into consistent 8-bit representations with tone-mapping applied for cross-sensor compatibility. Paired global-shutter high-resolution references are recorded to support training for multi-focus super-resolution, rolling-shutter correction, and deblurring.
Later, real-time on-drone collection is used for continuous retraining.
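The normalization step above (mapping high-bit-depth sensor streams into consistent 8-bit representations) can be sketched as follows. The posting does not specify the tone-mapping method, so this is a minimal illustration using percentile stretching, a common choice for thermal imagery; the function name and parameters are hypothetical.

```python
import numpy as np

def tonemap_to_8bit(frame: np.ndarray, lo_pct: float = 1.0, hi_pct: float = 99.0) -> np.ndarray:
    """Map a high-bit-depth sensor frame (e.g. 16-bit thermal) to 8 bits.

    Percentile clipping discards outlier pixels (hot spots, dead pixels)
    so the useful dynamic range fills the 0-255 output.
    """
    f = frame.astype(np.float32)
    lo, hi = np.percentile(f, [lo_pct, hi_pct])
    if hi <= lo:  # flat frame: avoid division by zero
        return np.zeros(frame.shape, dtype=np.uint8)
    f = np.clip((f - lo) / (hi - lo), 0.0, 1.0)
    return (f * 255.0 + 0.5).astype(np.uint8)

# Example: a synthetic 16-bit thermal frame
thermal = np.random.randint(20000, 30000, size=(480, 640), dtype=np.uint16)
out = tonemap_to_8bit(thermal)
print(out.dtype, out.min(), out.max())
```

Applying the same mapping across RGB, thermal, and low-light streams gives downstream models a uniform input range, which is what makes cross-sensor training data compatible.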
2. Model Training
• Low-Light & Thermal Enhancement: Integrate state-of-the-art algorithms such as DarkIR, AnyTSR, and thermal/RGB tone-mapped enhancement models.
• Deblurring & Motion Correction: Train GyroDeblur-style networks and rolling-shutter correction models (e.g., JAMNet) using global-shutter references.
• Super-Resolution: Use NAFSSR and multi-sensor SR architectures to fuse thermal, RGB, and LiDAR/radar data, increasing effective detection range.
• Detection & Re-Identification: Train rolling-shutter-aware detectors such as RSDet, and cross-sensor re-ID models such as OSNet for people and vehicle matching.
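Once a re-ID model such as OSNet produces appearance embeddings, matching a new detection against a search profile reduces to nearest-neighbor search in embedding space. The sketch below is a toy NumPy illustration of that matching step, not Genist's actual pipeline; the function name, threshold, and 2-D embeddings are hypothetical.

```python
import numpy as np

def match_profile(query_emb: np.ndarray, gallery: np.ndarray, threshold: float = 0.7):
    """Return (best_index, similarity) of the gallery embedding closest to
    the query, or (None, score) if nothing clears the threshold.

    query_emb: (D,) appearance embedding of a new detection (e.g. from a re-ID model).
    gallery:   (N, D) embeddings of the search profile / previously seen identities.
    """
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity against every gallery entry
    best = int(np.argmax(sims))
    score = float(sims[best])
    return (best, score) if score >= threshold else (None, score)

# Toy example: 3 gallery identities; the query is nearly identical to identity 1
gallery = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
idx, score = match_profile(np.array([0.1, 1.0]), gallery)
print(idx, round(score, 3))
```

In practice the gallery would hold embeddings from earlier frames or reference imagery, so the same person or vehicle can be re-acquired across sensors and viewpoints.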
3. Autonomous Mapping & Planning
Fuse enhanced RGB/thermal features with LiDAR/radar data using MAV3D-style pipelines. Use power-line detection, opening identification, and PointCloudTraj path planning to navigate toward detected objects.
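The core idea of planning a route toward a detected object can be illustrated with a minimal grid search. A real planner such as PointCloudTraj generates smooth 3D trajectories directly from point clouds, so the A* sketch below is only a stand-in for the concept of searching a fused obstacle map toward a target; the grid and coordinates are hypothetical.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (1 = blocked, 0 = free)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = count()                      # tie-breaker so the heap never compares nodes
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:           # already expanded via a cheaper route
            continue
        came_from[cur] = parent
        if cur == goal:                # walk parent links back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None  # target unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(path)
```

The occupancy grid here would be derived from the fused LiDAR/radar map, with detections (e.g. an opening in a structure) supplying the goal cell.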
4. Deployment & Validation
Integrate all perception modules into a real-time onboard execution stack, validate during high-speed flight, and refine through iterative retraining with newly collected aerial data.

Required expertise or skills:

- Quantization, pruning, and ONNX deployment for real-time inference
- Familiarity with:
  - object detection and re-identification models used in this project, including YOLO and OSNet
  - hybrid vision transformers that combine ViT backbones with object detection
  - image enhancement models used in this project, including JAMNet, DarkIR, AnyTSR, and NAFSSR
  - 3D mapping approaches used in this project, such as SLAM and MAV3D
  - path planning methods used in this project, such as PointCloudTraj
- Linux, Python, C
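As a pointer to the quantization skill listed above: the essence of int8 quantization for real-time inference is mapping float weights to 8-bit integers plus a scale and zero point. The sketch below shows that affine mapping in NumPy; a production workflow would instead use a toolchain such as ONNX Runtime's quantizer, and the function names here are hypothetical.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Affine (asymmetric) int8 quantization of a weight tensor.

    Returns the quantized tensor plus (scale, zero_point) so the original
    values can be approximately recovered: w ~= scale * (q - zero_point).
    """
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0            # avoid zero scale for constant tensors
    zero_point = int(round(-lo / scale)) - 128  # maps lo to -128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return scale * (q.astype(np.float32) - zero_point)

w = np.linspace(-1.0, 1.0, 5, dtype=np.float32)  # [-1, -0.5, 0, 0.5, 1]
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
print(q, float(np.abs(w - w_hat).max()))
```

Shrinking weights and activations to 8 bits is what makes the perception models above fit the compute and power budget of an onboard flight computer.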