AI-enabled mission readiness & escalation authority for intelligence operations - ON-1179
Project type: Research
Desired discipline(s): Aerospace studies, Engineering, Engineering - computer / electrical, Computer science, Mathematical Sciences
Company: mLAi Analytics Inc.
Project Length: 6 months to 1 year
Preferred start date: As soon as possible.
Language requirement: English
Location(s): Toronto, ON, Canada
No. of positions: 1
Desired education level: Master's, PhD, Postdoctoral fellow, Recent graduate
Open to applicants registered at an institution outside of Canada: No
About the company:
mLAi Analytics Inc. is a Canadian AI and systems engineering company focused on mission-critical readiness and decision authority systems for government, defence, and other high-consequence operational environments. The company designs production-grade, audit-ready platforms that integrate heterogeneous data, enforce readiness and safety gates, and support accountable human-in-the-loop decisions under compressed timelines. mLAi Analytics is currently delivering Pixel Phase 2, a multi-year Government of Canada program, in which its platform operates as a mission/data intelligence and readiness system under formal oversight, performing continuous data ingestion, readiness and risk gating, and traceable decision routing at scale.
Describe the project:
The project will develop a sovereign AI authority layer that integrates heterogeneous intelligence signals (ISR feeds, analyst inputs, system health, uncertainty indicators) to determine when intelligence products are actionable, require escalation, or must be withheld under operational and political constraints.
Capabilities being transferred from Turning Pixels Into Data (Phase 2):
• Pre-decision scoring (complexity / risk → ambiguity / consequence)
• Deterministic routing (fast path vs escalation)
• Human-on-the-loop authority
The inputs are intelligence signals and feeds, confidence estimates, analyst judgments, and timelines, as sketched below.
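As a hedged illustration (a minimal sketch, not the Pixel Phase 2 implementation), the Python fragment below shows how pre-decision scoring and deterministic routing could sit over such inputs; the Signal fields, weights, and thresholds are invented for this posting.

    # Illustrative sketch only. Field names, weights, and thresholds are assumptions
    # made for this posting, not the Pixel Phase 2 implementation.
    from dataclasses import dataclass

    @dataclass
    class Signal:
        confidence: float   # aggregate model/analyst confidence, in [0, 1]
        consequence: float  # assessed consequence of acting on the product, in [0, 1]
        ambiguity: float    # disagreement across sources, in [0, 1]

    def pre_decision_score(s: Signal) -> float:
        # Higher score = higher decision risk (low confidence, high stakes, high ambiguity).
        return 0.4 * (1.0 - s.confidence) + 0.4 * s.consequence + 0.2 * s.ambiguity

    def route(s: Signal, fast_path_max: float = 0.35, withhold_min: float = 0.75) -> str:
        # Deterministic routing: a fixed rule, not the ML model, chooses the path,
        # so every routing outcome is reproducible and auditable.
        score = pre_decision_score(s)
        if score < fast_path_max:
            return "FAST_PATH"   # product released as actionable
        if score >= withhold_min:
            return "WITHHOLD"    # held back under operational/political constraints
        return "ESCALATE"        # routed to a human authority (human-on-the-loop)

    print(route(Signal(confidence=0.93, consequence=0.20, ambiguity=0.10)))  # FAST_PATH
    print(route(Signal(confidence=0.55, consequence=0.80, ambiguity=0.60)))  # ESCALATE

The design intent shown here is that learned or estimated quantities only feed the score, while a fixed, inspectable rule decides the path, keeping routing outcomes reproducible for audit.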
Funding context:
• Intelligence overload + analyst bottlenecks
• Demand for explainable escalation authority, not black-box AI
• Direct relevance to DND / Five Eyes partners
Research questions:
• How do you compute the operational readiness of an intelligence product?
• How do you formalize escalation thresholds under uncertainty? (one candidate formalization is sketched below)
• How do you preserve accountability when AI participates in decision authority?
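One hedged illustration of the second question: under the assumption (made only for this sketch) that corroborating and contradicting sources update a Beta posterior over product correctness, an escalation threshold can be stated as "release only when the lower credible bound clears a readiness bar." The counts, bar, and function names below are invented for illustration, not project results.

    # Illustrative sketch: one way to formalize an escalation threshold under uncertainty.
    # Assumption for illustration only: corroborating/contradicting source counts are
    # modelled with a Beta posterior, and escalation triggers when the lower credible
    # bound on "the product is correct" drops below a readiness bar.
    from scipy.stats import beta

    def readiness_lower_bound(supporting: int, contradicting: int, credibility: float = 0.90) -> float:
        # Beta(1 + supporting, 1 + contradicting) posterior over "the product is correct".
        a, b = 1 + supporting, 1 + contradicting
        return beta.ppf((1 - credibility) / 2, a, b)   # lower edge of the credible interval

    def decide(supporting: int, contradicting: int, readiness_bar: float = 0.75) -> str:
        lb = readiness_lower_bound(supporting, contradicting)
        # Deterministic rule: release only when even the pessimistic bound clears the bar.
        return "ACTIONABLE" if lb >= readiness_bar else "ESCALATE"

    print(decide(supporting=18, contradicting=1))   # ACTIONABLE
    print(decide(supporting=4, contradicting=2))    # ESCALATE

The same pattern generalizes: the uncertainty model can change, but the threshold remains an explicit, auditable statement about a pessimistic bound rather than a point estimate.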
Required expertise/skills:
This project requires multidisciplinary expertise spanning AI systems engineering, decision science, and intelligence operations.
Core technical skills
• Applied AI/ML for decision support (not autonomous decision-making)
• Risk, uncertainty, and confidence modeling (Bayesian reasoning, probabilistic scoring, ambiguity quantification)
• Deterministic routing and rules-based orchestration layered over ML signals
• Systems integration across heterogeneous data feeds (ISR, telemetry, analyst inputs, timelines)
• Explainable AI (XAI) for traceability, auditability, and justification of escalation decisions
Decision & governance expertise
• Human-on-the-loop system design with clear authority handoffs
• Operational readiness assessment frameworks
• Escalation threshold design under uncertainty and time pressure
• Accountability-preserving AI architectures (decision logs, rationale capture, override mechanisms); a minimal decision-record sketch follows this list
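As a minimal sketch of what accountability-preserving record-keeping could look like, assuming an append-only JSON Lines log and invented field names (product_id, route, rationale, decided_by, overridden_by), not a prescribed schema for this project:

    # Illustrative sketch only: decision logging with rationale capture and an
    # override path. All field names are assumptions, not a prescribed schema.
    import hashlib
    import json
    from dataclasses import asdict, dataclass, field
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class DecisionRecord:
        product_id: str
        route: str                            # e.g. "FAST_PATH", "ESCALATE", "WITHHOLD"
        score: float                          # pre-decision score that drove the routing
        rationale: str                        # human-readable justification for the route
        decided_by: str                       # "system" or the accountable analyst's ID
        overridden_by: Optional[str] = None   # set when a human overrides the route
        timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def append_to_log(record: DecisionRecord, log_path: str = "decision_log.jsonl") -> str:
        # Append-only JSON Lines log; the returned digest lets an auditor detect tampering.
        line = json.dumps(asdict(record), sort_keys=True)
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(line + "\n")
        return hashlib.sha256(line.encode("utf-8")).hexdigest()

    # An override is recorded as a new entry referencing the same product_id, so the
    # original machine decision is never rewritten.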
Domain & operational knowledge
• Intelligence analysis workflows and analyst bottlenecks
• ISR pipelines and confidence estimation
• Sovereign AI constraints, security, and policy compliance
• Familiarity with defence, intelligence, or Five Eyes operational contexts

