Safety Modeling for Autonomous AI Systems - ON-1180
Project type: Research
Desired discipline(s): Engineering - computer / electrical, Engineering, Computer science, Mathematical Sciences, Mathematics
Company: El Ghandour Research Labs Inc.
Project Length: 4 to 6 months
Preferred start date: As soon as possible.
Language requirement: English
Location(s): Hamilton, ON, Canada
No. of positions: 1
Desired education level: Master's, PhD
Open to applicants registered at an institution outside of Canada: No
About the company:
El Ghandour Research Labs Inc. is a Canadian scientific research organization focused on autonomous systems safety. Our work develops mathematical and engineering safety guarantees for artificial intelligence, robotics, and autonomous vehicle technologies.
We operate at the foundational research layer through what we call "Deep Science": interdisciplinary research that identifies stability principles underlying complex intelligent systems, regardless of domain. Our objective is to ensure that next-generation autonomous technologies remain safe, predictable, and controllable as they scale.
Our research addresses one of the most critical technological challenges of our time: how to design autonomous systems that remain aligned, stable, and fail-safe in real-time deployment environments.
Project description:
This research project focuses on developing mathematical and computational safety frameworks for autonomous systems, including artificial intelligence agents, robotics platforms, and autonomous vehicle technologies.
The primary objective is to design verifiable stability and control models that ensure intelligent systems operate within safe behavioral and operational boundaries. As autonomous technologies scale, ensuring predictability, alignment, and fail-safe operation becomes critical for real-world deployment.
Key research areas include:
• Modeling system instability and edge-case failure scenarios
• Designing constraint and control architectures for learning systems
• Developing simulation environments to stress-test autonomous agents
• Creating real-time safety monitoring and intervention frameworks
The research intern will work on interdisciplinary methodologies combining machine learning, control theory, and applied mathematics. Techniques may include probabilistic modeling, reinforcement learning safety constraints, dynamical systems analysis, and simulation-based validation.
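As a flavour of the kind of work involved, the real-time safety monitoring and intervention ideas above can be sketched in a few lines. The following is a minimal, illustrative example only; the dynamics model, safe bounds, and function names are assumptions for the sketch, not the project's actual frameworks.

```python
# Hypothetical sketch of a runtime safety monitor: before an agent's
# proposed control input is executed, the monitor predicts the next state
# under an assumed one-step model and intervenes with a conservative
# fallback action if the prediction leaves a safe operating region.

SAFE_LOW, SAFE_HIGH = -1.0, 1.0   # assumed safe state bounds (illustrative)
DT = 0.1                          # assumed integration time step

def predict_next_state(x: float, u: float) -> float:
    """One-step Euler prediction under assumed dynamics dx/dt = u."""
    return x + DT * u

def safe_action(x: float, u_proposed: float, fallback: float = 0.0):
    """Pass the proposed action through if the predicted state stays
    within bounds; otherwise substitute the fallback (an intervention).
    Returns (action, intervened)."""
    x_next = predict_next_state(x, u_proposed)
    if SAFE_LOW <= x_next <= SAFE_HIGH:
        return u_proposed, False   # proposed action is safe
    return fallback, True          # monitor overrides the agent

# Usage: near the boundary, an aggressive action triggers an intervention.
action, intervened = safe_action(0.95, u_proposed=2.0)
```

In practice the one-step model would be replaced by verified system dynamics or a learned model with uncertainty bounds, and the safe set by formally derived constraints, which is precisely the gap this research aims to address.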
Project outcomes will include technical prototypes, research publications, and intellectual property contributing to next-generation AI safety infrastructure. The work supports the broader mission of enabling safe and stable deployment of autonomous technologies across industries.
Required expertise/skills:
The ideal candidate will have an academic background in artificial intelligence, computer science, robotics, or applied mathematics.
Required skills include:
• Machine learning and AI model development
• Strong Python programming experience
• Mathematical modeling and statistical analysis
• Experience with simulation environments (e.g., MATLAB, ROS, or similar)
• Understanding of control systems or dynamical systems
Preferred assets:
• Reinforcement learning or deep learning research
• Autonomous systems or robotics experience
• AI safety, validation, or testing frameworks
• Data analysis and visualization tools
• Academic research and technical writing experience
The candidate should be comfortable working in interdisciplinary research environments and translating theoretical frameworks into applied system prototypes.

