Low-latency video understanding for fine-grained esports skill assessment: developing computer vision classifiers to measure and improve key FPS gameplay mechanics - BC-978

Project type: Innovation
Desired discipline(s): Engineering - computer / electrical, Engineering, Computer science, Mathematical Sciences
Company: Training Arc
Project Length: 4 to 6 months
Preferred start date: As soon as possible.
Language requirement: English
Location(s): Vancouver, BC, Canada
No. of positions: 1
Desired education level: Master's, PhD, Postdoctoral fellow
Open to applicants registered at an institution outside of Canada: No

About the company: 

Training Arc is a Canadian technology startup building AI-powered tools to help competitive gamers improve their performance. Our core product is an automated coaching platform that uses computer vision and machine learning to analyze gameplay footage, diagnose mechanics such as crosshair placement, peeking, and gunfight hygiene, and deliver targeted feedback and training drills.
By applying fine-grained, frame-level video analysis, Training Arc connects advances in temporal video understanding with practical applications in esports. The company's mission is to make structured, evidence-based improvement accessible to players everywhere, much as analytics transformed traditional sports.
Based in Vancouver, British Columbia, Training Arc has developed a minimum viable product and is advancing toward broader rollout.

Describe the project.: 

The innovation lies in advancing temporal video understanding for competitive gaming, with a focus on first-person shooters (FPS). Whereas existing action recognition benchmarks target coarse activities, this project targets the fine-grained mechanics that drive in-game performance.

Project Outcomes
1. Novel Gameplay Classifiers – Models capable of detecting and evaluating subtle FPS mechanics such as crosshair placement, peeking, and gunfight hygiene.
2. Low-Latency Video Analysis Pipeline – Optimized classification models (compression, quantization, sliding-window inference) for near real-time analysis on consumer hardware (an illustrative sliding-window sketch follows this list).
3. Human-Interpretable Performance Metrics – Translation of raw model outputs into actionable coaching feedback for esports players.
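As a purely illustrative sketch of the sliding-window inference mentioned in outcome 2 (the window size, stride, and clip-classifier interface are assumptions, not project specifications):

# Minimal sliding-window inference sketch over a long frame sequence.
import torch

WINDOW = 16  # frames per clip fed to the model (assumed)
STRIDE = 4   # frames advanced between predictions (assumed)

@torch.no_grad()
def sliding_window_predict(model, frames):
    """frames: preprocessed tensor of shape (T, C, H, W).
    Returns per-window class probabilities of shape (num_windows, num_classes)."""
    model.eval()
    outputs = []
    for start in range(0, frames.shape[0] - WINDOW + 1, STRIDE):
        clip = frames[start:start + WINDOW].unsqueeze(0)  # (1, WINDOW, C, H, W)
        outputs.append(torch.softmax(model(clip), dim=-1))
    return torch.cat(outputs, dim=0)

Overlapping windows trade extra compute for denser, lower-latency predictions; the stride would be tuned against the latency budget on consumer hardware.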

Candidate Tasks
1. Model Development – Fine-tune state-of-the-art architectures for FPS-specific mechanics and experiment with temporal strategies for frame-level prediction.
2. Data & Annotation Strategy – Build efficient pipelines for preprocessing, labeling, and validating clips, incorporating active learning or probabilistic labeling to reduce cost and noise.
3. Performance Optimization – Apply quantization, pruning, and ONNX/TensorRT conversion to balance accuracy with real-time deployment (see the export-and-quantize sketch after this list).
4. Evaluation & Metrics – Establish benchmarks (AUC, AP, latency, interpretability) and correlate classifier outputs with player outcomes.
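To make the optimization task concrete, one possible path is ONNX export from PyTorch followed by post-training dynamic quantization in ONNX Runtime; the sketch below is a rough illustration, and the model, input shape, opset version, and file names are assumptions rather than project requirements.

# Hedged sketch: export a trained clip classifier to ONNX, then apply
# post-training dynamic quantization (INT8 weights) with ONNX Runtime.
import torch
from onnxruntime.quantization import quantize_dynamic, QuantType

def export_and_quantize(model, fp32_path="classifier.onnx", int8_path="classifier.int8.onnx"):
    model.eval()
    dummy = torch.randn(1, 16, 3, 224, 224)  # (batch, frames, C, H, W) - assumed input shape
    torch.onnx.export(
        model, dummy, fp32_path,
        input_names=["clip"], output_names=["logits"],
        dynamic_axes={"clip": {0: "batch"}},
        opset_version=17,
    )
    # Dynamic quantization converts weights to INT8; activations remain float.
    quantize_dynamic(fp32_path, int8_path, weight_type=QuantType.QInt8)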

Methodology
1. Data Collection & Preprocessing – Extract Valorant gameplay clips; apply object detection for player, weapon, and environment recognition.
2. Model Development – Fine-tune advanced video representation models and test temporal architectures for detecting T2/T3 events.
3. Annotation Strategy – Use semi-automated pipelines with human-in-the-loop validation; implement Gaussian targets and probabilistic annotations (a Gaussian-target sketch follows this list); apply active learning for efficiency.
4. Performance Optimization – Research compression methods (quantization, pruning, knowledge distillation) and benchmark inference latency.
5. Evaluation & Validation – Define benchmarks (AUC, Average Precision, throughput) and run correlation studies between model predictions and real-world player improvement (an evaluation sketch also follows this list).
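As one illustration of the Gaussian-target idea in step 3, the sketch below builds soft per-frame labels around annotated event frames; the frame count, event positions, and sigma are arbitrary assumptions.

# Hedged sketch: soft "Gaussian target" labels around annotated event frames,
# so frames near an event become partial positives instead of a hard 0/1 edge.
import torch

def gaussian_targets(num_frames, event_frames, sigma=2.0):
    """Return a (num_frames,) tensor of soft targets in [0, 1]."""
    t = torch.arange(num_frames, dtype=torch.float32)
    targets = torch.zeros(num_frames)
    for f in event_frames:
        targets = torch.maximum(targets, torch.exp(-0.5 * ((t - f) / sigma) ** 2))
    return targets

# Example: a 60-frame clip with annotated events at frames 12 and 40; these
# targets could be paired with a per-frame binary cross-entropy loss, e.g.
# F.binary_cross_entropy_with_logits(frame_logits, gaussian_targets(60, [12, 40])).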
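Similarly for step 5, a simple way to report AUC, Average Precision, and latency together might look like the following; the scikit-learn metrics are standard, while the model_fn interface and percentile choices are assumptions.

# Hedged sketch: evaluation loop reporting AUC, Average Precision, and latency.
import time
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate(model_fn, clips, labels):
    """model_fn maps one preprocessed clip to a positive-class score."""
    scores, latencies = [], []
    for clip in clips:
        start = time.perf_counter()
        scores.append(model_fn(clip))
        latencies.append(time.perf_counter() - start)
    return {
        "auc": roc_auc_score(labels, scores),
        "ap": average_precision_score(labels, scores),
        "p50_latency_ms": 1000 * float(np.percentile(latencies, 50)),
        "p95_latency_ms": 1000 * float(np.percentile(latencies, 95)),
    }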

Required expertise/skills: 

1. Machine Learning & Deep Learning
o Solid understanding of supervised learning, model training, and evaluation.
o Familiarity with class imbalance handling, soft labeling, and loss function design.
2. Computer Vision
o Experience with video analysis, temporal modeling and action recognition.
o Background in object detection and feature extraction.
3. Programming & Frameworks
o Proficiency in Python and deep learning frameworks (PyTorch, TensorFlow), including model training, fine-tuning, and evaluation on video data.
o Experience with GPU-based training and inference optimization.
4. Data Engineering & Annotation Strategy
o Ability to manage large-scale datasets (video preprocessing, clip extraction).
o Knowledge of annotation tools and techniques for building training datasets.
5. Google Cloud Platform (GCP)
o Familiarity with cloud infrastructure for ML workflows, including using GPUs/TPUs for training, managing datasets in cloud storage, and deploying models via services such as Vertex AI or Kubernetes (GKE).

Preferred (Nice-to-Have):
• Familiarity with ONNX/TensorRT or other model deployment/optimization tools.
• Interest in esports, human performance, or sports analytics.
• Prior experience with real-time or low-latency ML applications.