An intelligent backend featuring auto-evaluation, self-training scoring models, dynamic task curation, and continuous learning path optimization.
User completes a versioned question bank. Each answer is AI-evaluated in real time.
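A minimal sketch of the per-answer evaluation step. `AnswerEvaluation`, `evaluate_answer`, and the placeholder heuristic (standing in for the real LLM-backed evaluator) are illustrative assumptions, not the project's actual API:

```python
from dataclasses import dataclass

# Hypothetical result of evaluating one quiz answer in real time.
@dataclass
class AnswerEvaluation:
    question_id: str
    score: float                        # 0.0-1.0 overall score for this answer
    dimension_scores: dict[str, float]  # per-dimension contributions
    model_version: str                  # scoring model used, for reproducibility

def evaluate_answer(question_id: str, answer: str,
                    model_version: str = "v1") -> AnswerEvaluation:
    """Score one answer. Production would call the LLM evaluator here;
    a length-based placeholder keeps the sketch runnable."""
    base = min(len(answer) / 200.0, 1.0)  # placeholder heuristic, not real scoring
    return AnswerEvaluation(
        question_id=question_id,
        score=base,
        dimension_scores={"skill_maturity": base, "goal_clarity": base},
        model_version=model_version,
    )
```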
From these evaluations, the AI derives five profile dimensions: skill_maturity, learning_speed, execution_bias, risk_tolerance, goal_clarity.
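One plausible shape for the derived profile, using the five dimension names above; the simple-mean aggregation in `derive_profile` is an assumption:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    skill_maturity: float
    learning_speed: float
    execution_bias: float
    risk_tolerance: float
    goal_clarity: float

DIMENSIONS = ("skill_maturity", "learning_speed", "execution_bias",
              "risk_tolerance", "goal_clarity")

def derive_profile(answer_scores: list[dict[str, float]]) -> UserProfile:
    """Average each dimension across all evaluated answers
    (simple-mean aggregation is an assumed rule)."""
    means = {}
    for dim in DIMENSIONS:
        vals = [s.get(dim, 0.0) for s in answer_scores]
        means[dim] = sum(vals) / len(vals) if vals else 0.0
    return UserProfile(**means)
```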
A rules engine plus the AI proposes 3-5 primary tasks based on the profile and the current difficulty mode.
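A sketch of how the rules side of task proposal might filter a catalog down to the 3-5 window; `Task`, its fields, and the backfill rule are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    difficulty: str   # e.g. "easy" / "normal" / "hard"
    min_skill: float  # minimum skill_maturity this task assumes

def propose_primary_tasks(profile: dict[str, float], mode: str,
                          catalog: list[Task]) -> list[Task]:
    """Filter the catalog by difficulty mode and skill fit, then cap at 3-5."""
    fits = [t for t in catalog
            if t.difficulty == mode
            and profile.get("skill_maturity", 0.0) >= t.min_skill]
    # Guarantee the 3-5 window: backfill with other tasks if the filter is too strict.
    if len(fits) < 3:
        fits += [t for t in catalog if t not in fits][:3 - len(fits)]
    return fits[:5]
```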
User selects 1 primary + N secondary tasks → 15-min grace period → Deadline starts.
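The grace-period arithmetic is small enough to show directly; only the 15-minute window comes from the flow above, while `deadline_window` and the example three-day duration are assumptions:

```python
from datetime import datetime, timedelta, timezone

GRACE_PERIOD = timedelta(minutes=15)  # from the flow above

def deadline_window(selected_at: datetime,
                    duration: timedelta) -> tuple[datetime, datetime]:
    """Deadline clock starts only after the grace period, letting the
    user reconsider the selection without penalty."""
    start = selected_at + GRACE_PERIOD
    return start, start + duration

# Example: a three-day task set selected now (duration is illustrative).
start, due = deadline_window(datetime.now(timezone.utc), timedelta(days=3))
```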
The system scores progress and the user submits a reflection; the two are combined into a TaskSetScore.
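A sketch of the combination step, assuming a weighted average; the 70/30 split is illustrative:

```python
def task_set_score(progress_score: float, reflection_score: float,
                   w_progress: float = 0.7, w_reflection: float = 0.3) -> float:
    """Combine system-scored progress with the user's scored reflection.
    The 70/30 weighting is an assumption, not the project's actual formula."""
    return w_progress * progress_score + w_reflection * reflection_score
```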
Profile updates → Next task options generated → Difficulty adjusts → Repeat forever.
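A sketch of the difficulty step of this loop, assuming a three-mode ladder stepped by the latest TaskSetScore; the 0.8 and 0.4 thresholds are guesses:

```python
MODES = ["easy", "normal", "hard"]

def adjust_difficulty(mode: str, task_set_score: float) -> str:
    """Step the difficulty mode up on strong results, down on weak ones.
    Thresholds are illustrative assumptions."""
    i = MODES.index(mode)
    if task_set_score >= 0.8:
        i = min(i + 1, len(MODES) - 1)
    elif task_set_score < 0.4:
        i = max(i - 1, 0)
    return MODES[i]
```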
ai_evaluation_service.py
question_quality_service.py
cohort_meta_learning_service.py
difficulty_adjustment_service.py
Every evaluation is stored in ai_evaluation_history with the question, answer, scores, and user context.
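An assumed row shape for ai_evaluation_history; only question, answer, scores, and user context come from the text above, and the remaining fields are guesses:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EvaluationRecord:
    user_id: str
    question_id: str
    question_version: str     # question bank is versioned
    answer: str
    scores: dict[str, float]  # per-dimension scores for this answer
    user_context: dict        # profile snapshot at evaluation time
    model_version: str        # which scoring model produced the scores
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```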
Evaluations are linked to actual outcomes (task completion, program success, deadline adherence).
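A sketch of the linking step, assuming evaluations and outcomes are keyed by user; the dict shapes are hypothetical:

```python
def link_outcomes(evaluations: list[dict],
                  outcomes: dict[str, dict]) -> list[dict]:
    """Attach task-completion / deadline-adherence outcomes to each
    evaluation so the scoring model can be trained on real results."""
    linked = []
    for ev in evaluations:
        outcome = outcomes.get(ev["user_id"])
        if outcome is not None:  # skip users with no outcome data yet
            linked.append({**ev, "outcome": outcome})
    return linked
```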
update_scoring_model_from_outcomes() recalculates weights based on what actually worked.
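A sketch of one way update_scoring_model_from_outcomes() could recalculate weights: correlate each dimension's scores with observed success and normalize. The correlation-to-weight mapping is an assumed strategy, not the project's confirmed algorithm:

```python
from statistics import correlation  # Python 3.10+

DIMENSIONS = ["skill_maturity", "learning_speed", "execution_bias",
              "risk_tolerance", "goal_clarity"]

def update_scoring_model_from_outcomes(linked: list[dict]) -> dict[str, float]:
    """Dimensions whose scores correlate with success get proportionally
    more weight. Assumes non-constant inputs (correlation() raises otherwise)."""
    success = [rec["outcome"]["success"] for rec in linked]  # 0.0-1.0 outcomes
    raw = {}
    for dim in DIMENSIONS:
        scores = [rec["scores"][dim] for rec in linked]
        raw[dim] = max(correlation(scores, success), 0.0)  # drop negative signal
    total = sum(raw.values()) or 1.0
    return {dim: w / total for dim, w in raw.items()}
```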
A new model version is then deployed; old quizzes keep their old scoring, and confidence scores track reliability.
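A sketch of the version pinning this implies, assuming an in-memory registry; real storage would live in the database:

```python
# version -> dimension weights; quiz_id -> pinned model version (assumed shapes)
MODEL_REGISTRY: dict[str, dict[str, float]] = {}
QUIZ_MODEL_PIN: dict[str, str] = {}

def deploy_model(version: str, weights: dict[str, float]) -> None:
    """Register a newly trained scoring model under a version tag."""
    MODEL_REGISTRY[version] = weights

def weights_for_quiz(quiz_id: str, latest: str) -> dict[str, float]:
    """New quizzes pick up the latest model; existing quizzes keep their
    pinned version so historical scores stay reproducible."""
    version = QUIZ_MODEL_PIN.setdefault(quiz_id, latest)
    return MODEL_REGISTRY[version]
```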