Propirate.
NEURAL_ENGINE_v6.0

Self-Evolving AI Mentoring System

An intelligent backend featuring auto-evaluation, self-training scoring models, dynamic task curation, and continuous learning path optimization.


🏗️ System Architecture

• API Layer: FastAPI Routes, WebSocket Handlers, Auth Middleware
• Service Layer (21 Services): AI Evaluation, Profiling, Task Curation, Auto-Evaluation, Question Evolution, Difficulty Adjustment
• Intelligence Layer: Scoring Models, Meta-Learning, Cohort Analysis, Weight Updates
• Data Layer: MongoDB Atlas, 16 Repositories, Evaluation History
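
A minimal sketch of a request crossing these layers, from route to service to repository. All class, route, and field names here are illustrative assumptions, not Propirate's actual modules:

```python
# Illustrative layering sketch; names are assumptions, not the real modules.
from fastapi import APIRouter

router = APIRouter()

class EvaluationRepository:
    """Data layer: would persist evaluations to MongoDB Atlas."""
    async def save(self, evaluation: dict) -> None:
        ...  # a motor/pymongo insert would go here

class AIEvaluationService:
    """Service layer: scores an answer, then stores it for later learning."""
    def __init__(self, repo: EvaluationRepository) -> None:
        self.repo = repo

    async def evaluate(self, question_id: str, answer: str) -> dict:
        evaluation = {"question_id": question_id, "score": 0.0}  # scoring omitted
        await self.repo.save(evaluation)  # the intelligence layer learns from this
        return evaluation

service = AIEvaluationService(EvaluationRepository())

@router.post("/evaluations")
async def create_evaluation(question_id: str, answer: str) -> dict:
    # The API layer stays thin and delegates to the service layer.
    return await service.evaluate(question_id, answer)
```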

🔄 End-to-End Learning Flow

1. Diagnostic Quiz: The user completes a versioned question bank; each answer is AI-evaluated in real time.
2. Profile Construction: The AI derives five dimensions: skill_maturity, learning_speed, execution_bias, risk_tolerance, goal_clarity.
3. Track Selection: A rules engine plus AI proposes 3-5 primary tasks based on the profile and difficulty mode.
4. Task Set Creation: The user selects 1 primary + N secondary tasks → 15-minute grace period → the deadline starts.
5. Auto + Self Evaluation: The system scores progress, the user submits a reflection, and the two are combined into a TaskSetScore.

Continuous Loop: Profile updates → next task options generated → difficulty adjusts → repeat indefinitely.
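
The flow above fixes two data shapes: the five-dimension profile from step 2 and the blended TaskSetScore from step 5. A minimal sketch, assuming field names beyond the documented dimensions and a 70/30 auto/self weighting that the source does not specify:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class LearnerProfile:
    # Step 2: the five AI-derived dimensions.
    skill_maturity: float
    learning_speed: float
    execution_bias: float
    risk_tolerance: float
    goal_clarity: float

def deadline_start(selected_at: datetime) -> datetime:
    """Step 4: the deadline clock starts after the 15-minute grace period."""
    return selected_at + timedelta(minutes=15)

@dataclass
class TaskSetScore:
    auto_score: float   # step 5: system-scored progress
    self_score: float   # step 5: the user's own reflection
    combined: float

def combine_scores(auto_score: float, self_score: float,
                   auto_weight: float = 0.7) -> TaskSetScore:
    """Blend auto- and self-evaluation; the 70/30 split is an assumption."""
    combined = auto_weight * auto_score + (1 - auto_weight) * self_score
    return TaskSetScore(auto_score, self_score, combined)
```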

🧠 Self-Training AI System

AI Evaluation Service (ai_evaluation_service.py)
  • Context-aware answer scoring
  • User history integration
  • Stores evaluations for later learning
  • ≈34 KB of evaluation logic
Question Evolution (question_quality_service.py)
  • Analyzes each question's predictive power
  • Auto-deprecates weak questions
  • Generates new questions from observed patterns
  • Weekly evolution jobs
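
A sketch of the deprecation rule, assuming predictive power is measured as the correlation between a question's evaluation scores and later task outcomes; the threshold, sample-size floor, and field names are all assumptions:

```python
from statistics import correlation  # Python 3.10+

def deprecate_weak_questions(questions: list[dict], evaluations: list[dict],
                             outcomes: dict[str, float],
                             min_power: float = 0.1, min_samples: int = 30) -> None:
    """Flag questions whose scores no longer predict real task outcomes."""
    for q in questions:
        evs = [e for e in evaluations if e["question_id"] == q["id"]]
        if len(evs) < min_samples:
            continue  # not enough evidence to judge this question yet
        scores = [e["score"] for e in evs]
        results = [outcomes[e["user_id"]] for e in evs]
        if abs(correlation(scores, results)) < min_power:
            q["status"] = "deprecated"  # weak question leaves the active bank
```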
Weight Management (cohort_meta_learning_service.py)
  • Tracks outcome correlations
  • Updates scoring-model weights
  • A/B tests different models
  • Confidence scoring per model
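
The real A/B selection strategy is not documented; one plausible shape is an epsilon-greedy pick over live model versions, using the per-model confidence score mentioned above:

```python
import random

def pick_scoring_model(variants: list[dict], exploration: float = 0.1) -> dict:
    """Epsilon-greedy A/B selection over scoring-model versions (a sketch)."""
    if random.random() < exploration:
        return random.choice(variants)  # occasionally trial a challenger model
    return max(variants, key=lambda m: m["confidence"])  # else the most reliable
```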
Difficulty Adjustment (difficulty_adjustment_service.py)
  • HARD → HARDER → HARDEST modes
  • Pace multipliers derived from completion history
  • Auto-downgrade on struggles
  • Deadline calibration
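
The escalation and downgrade logic could be approximated as below; the completion-rate thresholds and the pace formula are assumptions, not documented values:

```python
MODES = ["HARD", "HARDER", "HARDEST"]

def adjust_difficulty(mode: str, recent_completion_rate: float) -> str:
    """Escalate on sustained success, auto-downgrade on struggles."""
    i = MODES.index(mode)
    if recent_completion_rate >= 0.85 and i < len(MODES) - 1:
        return MODES[i + 1]  # HARD -> HARDER -> HARDEST
    if recent_completion_rate < 0.50 and i > 0:
        return MODES[i - 1]  # auto-downgrade on struggles
    return mode

def calibrated_deadline(base_hours: float, pace_multiplier: float) -> float:
    """Deadline calibration: the multiplier comes from the user's completion
    history; consistently fast users (< 1.0) get tighter deadlines."""
    return base_hours * pace_multiplier
```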

⚖️ How Weights Self-Update

1. 📊 Data Collection: Every evaluation is stored in ai_evaluation_history with the question, answer, scores, and user context.
2. 🔗 Outcome Linking: Evaluations are linked to actual outcomes (task completion, program success, deadline adherence).
3. 🔄 Model Retraining: update_scoring_model_from_outcomes() recalculates weights based on what actually predicted those outcomes.
4. 📈 Versioned Deploy: The new model version is deployed; old quizzes keep their old scoring, and confidence scores track each version's reliability.
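
A sketch of what that retraining step could look like under the description above: weights drift toward each dimension's observed outcome correlation, and every update ships as a new version. Apart from update_scoring_model_from_outcomes() itself, every name and constant here is an assumption:

```python
from statistics import correlation  # Python 3.10+

def update_scoring_model_from_outcomes(history: list[dict],
                                       outcomes: dict[str, float],
                                       current: dict, lr: float = 0.2) -> dict:
    """Recalculate per-dimension weights from outcome correlations.

    history:  evaluation records, each with per-dimension scores
    outcomes: user_id -> observed success (0.0-1.0)
    current:  live model, e.g. {"version": 3, "weights": {...}, "confidence": 0.8}
    """
    new_weights = {}
    for dim, weight in current["weights"].items():
        xs = [h["scores"][dim] for h in history]
        ys = [outcomes[h["user_id"]] for h in history]
        target = max(correlation(xs, ys), 0.0)   # negative predictors decay to 0
        new_weights[dim] = (1 - lr) * weight + lr * target

    total = sum(new_weights.values()) or 1.0
    return {
        "version": current["version"] + 1,  # versioned deploy: old quizzes keep
        "weights": {d: w / total for d, w in new_weights.items()},  # old scoring
        "confidence": min(1.0, len(history) / 500),  # reliability grows with data
    }
```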


⏰ Background Scheduled Tasks

• Every 30s: Realtime Deadline Broadcast
• Daily, 2 AM UTC: Question Quality Update
• Daily, 3 AM UTC: Journey Stats Aggregation
• Sunday, 4 AM UTC: Weekly Question Evolution
• Sunday, 5 AM UTC: Outcome Correlation Update
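
These cadences map directly onto interval- and cron-style jobs. A sketch using APScheduler, which is an assumption about the actual scheduler; the job function names are placeholders for the service methods above:

```python
from apscheduler.schedulers.asyncio import AsyncIOScheduler

# Placeholder job bodies; the real implementations live in their services.
async def broadcast_deadlines(): ...
async def update_question_quality(): ...
async def aggregate_journey_stats(): ...
async def evolve_questions(): ...
async def update_outcome_correlations(): ...

scheduler = AsyncIOScheduler(timezone="UTC")
scheduler.add_job(broadcast_deadlines, "interval", seconds=30)
scheduler.add_job(update_question_quality, "cron", hour=2, minute=0)
scheduler.add_job(aggregate_journey_stats, "cron", hour=3, minute=0)
scheduler.add_job(evolve_questions, "cron", day_of_week="sun", hour=4, minute=0)
scheduler.add_job(update_outcome_correlations, "cron",
                  day_of_week="sun", hour=5, minute=0)
scheduler.start()  # requires a running asyncio event loop
```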