Merge pull request #1 from TDJX/agent_lifecycle_intergration

Agent lifecycle
This commit is contained in:
Михаил Краевский 2025-09-05 00:32:30 +03:00 committed by GitHub
commit 8fa727333a
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
73 changed files with 5864 additions and 4660 deletions

CLAUDE.md Normal file

@@ -0,0 +1,152 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Common Development Commands
### Application Startup
```bash
# Start FastAPI server
uvicorn app.main:app --reload --port 8000
# Start Celery worker (required for resume processing)
celery -A celery_worker.celery_app worker --loglevel=info
# Start LiveKit server (for voice interviews)
docker run --rm -p 7880:7880 -p 7881:7881 livekit/livekit-server --dev
```
### Database Management
```bash
# Run database migrations
alembic upgrade head
# Create new migration
alembic revision --autogenerate -m "description"
```
### Code Quality
```bash
# Format code and fix imports
ruff format .
# Lint and auto-fix issues
ruff check . --fix
# Type checking
mypy .
```
### Testing
```bash
# Run basic system tests
python simple_test.py
# Run comprehensive tests
python test_system.py
# Test agent integration
python test_agent_integration.py
# Run pytest suite
pytest
```
## Architecture Overview
### Core Components
**FastAPI Application** (`app/`):
- `main.py`: Application entry point with middleware and router configuration
- `routers/`: API endpoints organized by domain (resume, interview, vacancy, admin)
- `models/`: SQLModel database schemas with enums and relationships
- `services/`: Business logic layer handling complex operations
- `repositories/`: Data access layer using SQLModel/SQLAlchemy
**Background Processing** (`celery_worker/`):
- `celery_app.py`: Celery configuration with Redis backend
- `tasks.py`: Asynchronous tasks for resume parsing and interview analysis
- `interview_analysis_task.py`: Specialized task for processing interview results
**AI Interview System**:
- `ai_interviewer_agent.py`: LiveKit-based voice interview agent using OpenAI, Deepgram, and Cartesia
- `app/services/agent_manager.py`: Singleton manager for controlling the AI agent lifecycle
- Agent runs as a single process, handling one interview at a time (hackathon limitation)
- Inter-process communication via JSON command files
- Automatic startup/shutdown with FastAPI application lifecycle
**RAG System** (`rag/`):
- `vector_store.py`: Milvus vector database integration for resume search
- `llm/model.py`: OpenAI GPT integration for resume parsing and interview plan generation
- `service/model.py`: RAG service orchestration
### Database Schema
**Key Models**:
- `Resume`: Candidate resumes with parsing status, interview plans, and file storage
- `InterviewSession`: LiveKit rooms with AI agent process tracking
- `Vacancy`: Job postings with requirements and descriptions
- `Session`: User session management with cookie-based tracking
**Status Enums**:
- `ResumeStatus`: pending → parsing → parsed → interview_scheduled → interviewed
- `InterviewStatus`: created → active → completed/failed
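The two progressions above can be sketched as string-valued enums. A minimal sketch — the status values come from this file, but the member names and the `str` mixin are assumptions about how `app/models/` defines them:

```python
from enum import Enum

class ResumeStatus(str, Enum):
    # pending → parsing → parsed → interview_scheduled → interviewed
    PENDING = "pending"
    PARSING = "parsing"
    PARSED = "parsed"
    INTERVIEW_SCHEDULED = "interview_scheduled"
    INTERVIEWED = "interviewed"

class InterviewStatus(str, Enum):
    # created → active → completed/failed
    CREATED = "created"
    ACTIVE = "active"
    COMPLETED = "completed"
    FAILED = "failed"
```

The `str` mixin lets the values serialize directly in JSON responses and database columns.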
### External Dependencies
**Required Services**:
- PostgreSQL: Primary database with asyncpg driver
- Redis: Celery broker and caching layer
- Milvus: Vector database for semantic search (optional, has fallbacks)
- S3-compatible storage: Resume file storage
**API Keys**:
- OpenAI: Required for resume parsing and LLM operations
- Deepgram/Cartesia/ElevenLabs: Optional voice services (fallbacks available)
- LiveKit credentials: For interview functionality
## Development Workflow
### Resume Processing Flow
1. File upload via `/api/v1/resume/upload`
2. Celery task processes file and extracts text
3. OpenAI parses resume data and generates interview plan
4. Vector embeddings stored in Milvus for search
5. Status updates tracked through enum progression
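Since step 5 is observed by polling, a client-side wait loop might look like this sketch. The `fetch_status` callable and the terminal-status set are illustrative assumptions, not an API from this repo:

```python
import time

def wait_for_parsing(fetch_status, interval_s=2.0, max_attempts=30):
    """Poll a status-returning callable until the resume leaves the parsing pipeline (sketch)."""
    terminal = {"parsed", "interview_scheduled", "interviewed"}
    for _ in range(max_attempts):
        status = fetch_status()  # e.g. read the status field from the resume endpoint
        if status in terminal:
            return status
        time.sleep(interval_s)
    raise TimeoutError("resume parsing did not reach a terminal status")
```

In practice `fetch_status` would wrap an HTTP GET against the resume endpoint; here it is injected so the loop stays testable.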
### Interview System Flow
1. AI agent starts automatically with FastAPI application
2. Validate resume readiness via `/api/v1/interview/{id}/validate`
3. Check agent availability (singleton, one interview at a time)
4. Generate LiveKit token via `/api/v1/interview/{id}/token`
5. Assign interview session to agent via command files
6. Conduct real-time voice interview through LiveKit
7. Agent monitors for end commands or natural completion
8. Session cleanup and agent returns to idle state
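Steps 5 and 7 ride on the shared JSON command file. A minimal sketch of that handshake, using the field names (`action`, `session_id`, `room_name`, `metadata_file`) that appear in this PR's agent code:

```python
import json
import os

def send_command(path, action, session_id, room_name, metadata_file=None):
    """Manager side: write a command for the agent to pick up."""
    command = {"action": action, "session_id": session_id, "room_name": room_name}
    if metadata_file:
        command["metadata_file"] = metadata_file
    with open(path, "w", encoding="utf-8") as f:
        json.dump(command, f)

def read_command(path):
    """Agent side: poll the shared file; returns None when nothing is queued."""
    if not os.path.exists(path):
        return None
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

The agent polls `agent_commands.json` for a `start_session` command matching its room, and later for an `end_session` command matching its session.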
### Configuration Management
- Settings via `app/core/config.py` with Pydantic BaseSettings
- Environment variables loaded from `.env` file (see `.env.example`)
- Database URLs and API keys configured per environment
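A stdlib-only stand-in for this pattern, assuming nothing about the real `app/core/config.py` beyond "settings come from environment variables"; the real module uses Pydantic BaseSettings and the field names here are illustrative:

```python
import os
from dataclasses import dataclass, field

def _env(name, default=""):
    # Defer the environment lookup to instantiation time
    return lambda: os.environ.get(name, default)

@dataclass
class Settings:
    """Illustrative stand-in; the real config uses Pydantic BaseSettings."""
    database_url: str = field(default_factory=_env("DATABASE_URL"))
    redis_url: str = field(default_factory=_env("REDIS_URL", "redis://localhost:6379/0"))
    openai_api_key: str = field(default_factory=_env("OPENAI_API_KEY"))
```

Pydantic adds validation and `.env` loading on top of this; the shape of the settings object is the same.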
## Important Notes
- AI agent runs as a singleton process, handling one interview at a time
- Agent lifecycle is managed automatically with FastAPI startup/shutdown
- Interview sessions require LiveKit server to be running
- Agent communication happens via JSON files (agent_commands.json, session_metadata_*.json)
- Resume parsing is asynchronous and status should be checked via polling
- Vector search gracefully degrades if Milvus is unavailable
- Session management uses custom middleware with cookie-based tracking
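The Milvus graceful-degradation note above can be sketched as a simple fallback wrapper; the function names are hypothetical and only the try-semantic-then-fall-back shape is taken from this file:

```python
import logging

logger = logging.getLogger("search")

def search_resumes(query, vector_search, keyword_fallback):
    """Try semantic search first; fall back when the vector store is unreachable."""
    try:
        return vector_search(query)
    except ConnectionError as e:
        logger.warning("Milvus unavailable (%s); using keyword fallback", e)
        return keyword_fallback(query)
```

Injecting both search callables keeps the degradation policy in one place and makes it trivial to unit-test.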
## Agent Management API
```bash
# Check agent status
GET /api/v1/admin/agent/status
# Start/stop/restart agent manually
POST /api/v1/admin/agent/start
POST /api/v1/admin/agent/stop
POST /api/v1/admin/agent/restart
```
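A small helper mapping these admin actions to method/URL pairs, assuming the dev server from the startup command above (`localhost:8000`); the routes come from this file, the helper itself is illustrative:

```python
BASE = "http://localhost:8000/api/v1/admin/agent"

def agent_endpoint(action: str) -> tuple[str, str]:
    """Map an admin action to (HTTP method, URL); 'status' is a GET, the rest are POSTs."""
    if action == "status":
        return ("GET", f"{BASE}/status")
    if action in {"start", "stop", "restart"}:
        return ("POST", f"{BASE}/{action}")
    raise ValueError(f"unknown action: {action}")
```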


@@ -1,36 +1,29 @@
# -*- coding: utf-8 -*-
import asyncio
import json
import logging
import os
from datetime import datetime

# Force UTF-8 on Windows
if os.name == "nt":  # Windows
    import sys

    if hasattr(sys, "stdout") and hasattr(sys.stdout, "reconfigure"):
        sys.stdout.reconfigure(encoding="utf-8", errors="replace")
        sys.stderr.reconfigure(encoding="utf-8", errors="replace")

# Set the environment variable for Python
os.environ.setdefault("PYTHONIOENCODING", "utf-8")

from livekit.agents import Agent, AgentSession, JobContext, WorkerOptions, cli
from livekit.api import DeleteRoomRequest, LiveKitAPI
from livekit.plugins import cartesia, deepgram, openai, silero

from app.core.database import get_session
from app.repositories.interview_repository import InterviewRepository
from app.repositories.resume_repository import ResumeRepository
from app.services.interview_finalization_service import InterviewFinalizationService
from rag.settings import settings

logger = logging.getLogger("ai-interviewer")
logger.setLevel(logging.INFO)
@@ -39,12 +32,14 @@ logger.setLevel(logging.INFO)
async def close_room(room_name: str):
    """Fully close a LiveKit room (disconnect all participants)."""
    try:
        api = LiveKitAPI(
            settings.livekit_url, settings.livekit_api_key, settings.livekit_api_secret
        )
        # Create a RoomService to manage rooms
        await api.room.delete_room(delete=DeleteRoomRequest(room=room_name))
        logger.info(f"[ROOM_MANAGEMENT] Room {room_name} deleted successfully")
    except Exception as e:
        logger.error(f"[ROOM_MANAGEMENT] Failed to delete room {room_name}: {str(e)}")
        raise
@@ -53,7 +48,7 @@ async def close_room(room_name: str):
class InterviewAgent:
    """AI agent that conducts interviews and manages the dialogue."""

    def __init__(self, interview_plan: dict):
        self.interview_plan = interview_plan
        self.conversation_history = []
@@ -66,18 +61,25 @@ class InterviewAgent:
        self.last_user_response = None
        self.intro_done = False  # New flag: whether the greeting has been spoken
        self.interview_finalized = False  # Interview completion flag

        # Interview time tracking
        import time

        self.interview_start_time = time.time()
        self.duration_minutes = interview_plan.get("interview_structure", {}).get(
            "duration_minutes", 10
        )
        self.sections = self.interview_plan.get("interview_structure", {}).get(
            "sections", []
        )
        self.total_sections = len(self.sections)
        logger.info(
            f"[TIME] Interview started at {time.strftime('%H:%M:%S')}, duration: {self.duration_minutes} min"
        )

    def get_current_section(self) -> dict:
        """Get the current interview section."""
        if self.current_section < len(self.sections):
            return self.sections[self.current_section]
@@ -86,7 +88,7 @@ class InterviewAgent:
    def get_next_question(self) -> str:
        """Get the next question."""
        section = self.get_current_section()
        questions = section.get("questions", [])
        if self.current_question_in_section < len(questions):
            return questions[self.current_question_in_section]
        return None
@@ -97,7 +99,7 @@ class InterviewAgent:
        self.questions_asked_total += 1

        section = self.get_current_section()
        if self.current_question_in_section >= len(section.get("questions", [])):
            self.move_to_next_section()

    def move_to_next_section(self):
@@ -105,7 +107,9 @@ class InterviewAgent:
        self.current_section += 1
        self.current_question_in_section = 0
        if self.current_section < len(self.sections):
            logger.info(
                f"Переход к секции: {self.sections[self.current_section].get('name', 'Unnamed')}"
            )

    def is_interview_complete(self) -> bool:
        """The interview ends only when the LLM decides, via key phrases."""
@@ -113,39 +117,42 @@ class InterviewAgent:
    def get_system_instructions(self) -> str:
        """System instructions for the AI agent, including the key phrases that end the interview."""
        candidate_info = self.interview_plan.get("candidate_info", {})
        interview_structure = self.interview_plan.get("interview_structure", {})
        greeting = interview_structure.get("greeting", "Привет! Готов к интервью?")
        focus_areas = self.interview_plan.get("focus_areas", [])
        key_evaluation_points = self.interview_plan.get("key_evaluation_points", [])

        # Compute the current interview time
        import time

        elapsed_minutes = (time.time() - self.interview_start_time) / 60
        remaining_minutes = max(0, self.duration_minutes - elapsed_minutes)
        time_percentage = min(100, (elapsed_minutes / self.duration_minutes) * 100)

        # Build the interview plan summary for the agent
        sections_info = "\n".join(
            [
                f"- {section.get('name', 'Секция')}: {', '.join(section.get('questions', []))}"
                for section in self.sections
            ]
        )

        # Build strings safely to avoid quote conflicts
        candidate_name = candidate_info.get("name", "Кандидат")
        candidate_years = candidate_info.get("total_years", 0)
        candidate_skills = ", ".join(candidate_info.get("skills", []))
        focus_areas_str = ", ".join(focus_areas)
        evaluation_points_str = ", ".join(key_evaluation_points)

        # Time status
        if time_percentage > 90:
            time_status = "СРОЧНО ЗАВЕРШАТЬ"
        elif time_percentage > 75:
            time_status = "ВРЕМЯ ЗАКАНЧИВАЕТСЯ"
        else:
            time_status = "НОРМАЛЬНО"

        return f"""Ты опытный HR-интервьюер, который проводит адаптивное голосовое собеседование.

ИНФОРМАЦИЯ О КАНДИДАТЕ:
@@ -195,33 +202,34 @@ class InterviewAgent:
СТИЛЬ: Дружелюбный, профессиональный, заинтересованный в кандидате.
"""

    def get_time_info(self) -> dict[str, float]:
        """Return timing information for the interview."""
        import time

        elapsed_minutes = (time.time() - self.interview_start_time) / 60
        remaining_minutes = max(0.0, self.duration_minutes - elapsed_minutes)
        time_percentage = min(100.0, (elapsed_minutes / self.duration_minutes) * 100)

        return {
            "elapsed_minutes": elapsed_minutes,
            "remaining_minutes": remaining_minutes,
            "time_percentage": time_percentage,
            "duration_minutes": self.duration_minutes,
        }

    async def track_interview_progress(self, user_response: str) -> dict[str, any]:
        """Track interview progress for logging."""
        current_section = self.get_current_section()
        time_info = self.get_time_info()

        return {
            "section": current_section.get("name", "Unknown"),
            "questions_asked": self.questions_asked_total,
            "section_progress": f"{self.current_section + 1}/{len(self.sections)}",
            "user_response_length": len(user_response),
            "elapsed_minutes": f"{time_info['elapsed_minutes']:.1f}",
            "remaining_minutes": f"{time_info['remaining_minutes']:.1f}",
            "time_percentage": f"{time_info['time_percentage']:.0f}%",
        }
@@ -230,52 +238,116 @@ async def entrypoint(ctx: JobContext):
    logger.info("[INIT] Starting AI Interviewer Agent")
    logger.info(f"[INIT] Room: {ctx.room.name}")

    # The interview plan comes from the session metadata
    interview_plan = {}
    session_id = None

    # Check the command files to pick up a session
    command_file = "agent_commands.json"
    metadata_file = None

    # Wait for a command from the manager
    for _ in range(60):  # Wait up to 60 seconds
        if os.path.exists(command_file):
            try:
                with open(command_file, encoding="utf-8") as f:
                    command = json.load(f)
                if (
                    command.get("action") == "start_session"
                    and command.get("room_name") == ctx.room.name
                ):
                    session_id = command.get("session_id")
                    metadata_file = command.get("metadata_file")
                    logger.info(
                        f"[INIT] Received start_session command for session {session_id}"
                    )
                    break
            except Exception as e:
                logger.warning(f"[INIT] Failed to parse command file: {str(e)}")
        await asyncio.sleep(1)

    # Load the session metadata
    if metadata_file and os.path.exists(metadata_file):
        try:
            with open(metadata_file, encoding="utf-8") as f:
                metadata = json.load(f)
            interview_plan = metadata.get("interview_plan", {})
            session_id = metadata.get("session_id", session_id)
            logger.info(f"[INIT] Loaded interview plan for session {session_id}")
        except Exception as e:
            logger.warning(f"[INIT] Failed to load metadata: {str(e)}")
            interview_plan = {}

    # Fall back to the default plan if the plan is empty or has no sections
    if not interview_plan or not interview_plan.get("interview_structure", {}).get(
        "sections"
    ):
        logger.info("[INIT] Using default interview plan")
        interview_plan = {
            "interview_structure": {
                "duration_minutes": 2,  # TEST MODE: 2 minutes
                "greeting": "Привет! Это быстрое тестовое интервью на 2 минуты. Готов?",
                "sections": [
                    {
                        "name": "Знакомство",
                        "duration_minutes": 1,
                        "questions": ["Расскажи кратко о себе одним предложением"],
                    },
                    {
                        "name": "Завершение",
                        "duration_minutes": 1,
                        "questions": ["Спасибо! Есть вопросы ко мне?"],
                    },
                ],
            },
            "candidate_info": {
                "name": "Тестовый кандидат",
                "skills": ["Python", "React"],
                "total_years": 3,
            },
            "focus_areas": ["quick_test"],
            "key_evaluation_points": ["Коммуникация"],
        }

    interviewer = InterviewAgent(interview_plan)
    logger.info(
        f"[INIT] InterviewAgent created with {len(interviewer.sections)} sections"
    )

    # STT
    stt = (
        deepgram.STT(
            model="nova-2-general", language="ru", api_key=settings.deepgram_api_key
        )
        if settings.deepgram_api_key
        else openai.STT(
            model="whisper-1", language="ru", api_key=settings.openai_api_key
        )
    )
    # LLM
    llm = openai.LLM(
        model="gpt-4o-mini", api_key=settings.openai_api_key, temperature=0.7
    )
    # TTS
    tts = (
        cartesia.TTS(
            model="sonic-turbo",
            language="ru",
            voice="da05e96d-ca10-4220-9042-d8acef654fa9",
            api_key=settings.cartesia_api_key,
        )
        if settings.cartesia_api_key
        else silero.TTS(language="ru", model="v4_ru")
    )

    # Create a regular Agent and Session
    agent = Agent(instructions=interviewer.get_system_instructions())

    # Create an AgentSession with regular TTS
    session = AgentSession(vad=silero.VAD.load(), stt=stt, llm=llm, tts=tts)
@@ -287,10 +359,16 @@ async def entrypoint(ctx: JobContext):
        try:
            interview_repo = InterviewRepository(db)
            resume_repo = ResumeRepository(db)
            finalization_service = InterviewFinalizationService(
                interview_repo, resume_repo
            )
            success = await finalization_service.save_dialogue_to_session(
                room_name, dialogue_history
            )
            if not success:
                logger.warning(
                    f"[DB] Failed to save dialogue for room: {room_name}"
                )
        finally:
            await session_generator.aclose()
    except Exception as e:
@@ -299,46 +377,54 @@ async def entrypoint(ctx: JobContext):
    # --- Interview finalization logic ---
    async def finalize_interview(room_name: str, interviewer_instance):
        """Finalize the interview and kick off analysis."""
        # Check whether the interview is already finalized
        if interviewer_instance.interview_finalized:
            logger.info(f"[FINALIZE] Interview already finalized for room: {room_name}")
            return

        interviewer_instance.interview_finalized = True

        try:
            logger.info(
                f"[FINALIZE] Starting interview finalization for room: {room_name}"
            )

            # Collect interview metrics
            time_info = interviewer_instance.get_time_info()
            interview_metrics = {
                "total_messages": interviewer_instance.questions_asked_total,
                "dialogue_length": len(interviewer_instance.conversation_history),
                "elapsed_minutes": time_info["elapsed_minutes"],
                "planned_duration": time_info["duration_minutes"],
                "time_percentage": time_info["time_percentage"],
            }

            session_generator = get_session()
            db = await anext(session_generator)
            try:
                interview_repo = InterviewRepository(db)
                resume_repo = ResumeRepository(db)
                finalization_service = InterviewFinalizationService(
                    interview_repo, resume_repo
                )

                # Use the service to finalize the interview
                result = await finalization_service.finalize_interview(
                    room_name=room_name,
                    dialogue_history=interviewer_instance.conversation_history,
                    interview_metrics=interview_metrics,
                )

                if result:
                    logger.info(
                        f"[FINALIZE] Interview successfully finalized: session_id={result['session_id']}, task_id={result['analysis_task_id']}"
                    )
                else:
                    logger.error(
                        f"[FINALIZE] Failed to finalize interview for room: {room_name}"
                    )
            finally:
                await session_generator.aclose()
        except Exception as e:
@@ -348,24 +434,58 @@ async def entrypoint(ctx: JobContext):
    async def check_interview_completion_by_keywords(agent_text: str):
        """Check for interview completion via key phrases."""
        # Key phrases that end the interview
        ending_keywords = ["До скорой встречи"]

        text_lower = agent_text.lower()
        for keyword in ending_keywords:
            if keyword.lower() in text_lower:
                logger.info(
                    f"[KEYWORD_DETECTION] Found ending keyword: '{keyword}' in agent response"
                )
                if not interviewer.interview_finalized:
                    # Run the full interview completion chain
                    await complete_interview_sequence(ctx.room.name, interviewer)
                    return True
                break
        return False

    # --- Monitoring for end commands ---
    async def monitor_end_commands():
        """Monitor for session end commands."""
        command_file = "agent_commands.json"

        while not interviewer.interview_finalized:
            try:
                if os.path.exists(command_file):
                    with open(command_file, encoding="utf-8") as f:
                        command = json.load(f)

                    if (
                        command.get("action") == "end_session"
                        and command.get("session_id") == session_id
                    ):
                        logger.info(
                            f"[COMMAND] Received end_session command for session {session_id}"
                        )
                        if not interviewer.interview_finalized:
                            await complete_interview_sequence(
                                ctx.room.name, interviewer
                            )
                        break

                await asyncio.sleep(2)  # Check every 2 seconds
            except Exception as e:
                logger.error(f"[COMMAND] Error monitoring commands: {str(e)}")
                await asyncio.sleep(5)

    # Start command monitoring in the background
    asyncio.create_task(monitor_end_commands())

    # --- Full interview completion chain ---
    async def complete_interview_sequence(room_name: str, interviewer_instance):
        """
@@ -376,15 +496,15 @@ async def entrypoint(ctx: JobContext):
        """
        try:
            logger.info("[SEQUENCE] Starting interview completion sequence")

            # Step 1: finalize the interview in the database
            logger.info("[SEQUENCE] Step 1: Finalizing interview in database")
            await finalize_interview(room_name, interviewer_instance)
            logger.info("[SEQUENCE] Step 1: Database finalization completed")

            # Give all DB operations time to complete
            await asyncio.sleep(1)

            # Step 2: close the LiveKit room
            logger.info("[SEQUENCE] Step 2: Closing LiveKit room")
            try:
@@ -392,48 +512,54 @@ async def entrypoint(ctx: JobContext):
                logger.info(f"[SEQUENCE] Step 2: Room {room_name} closed successfully")
            except Exception as e:
                logger.error(f"[SEQUENCE] Step 2: Failed to close room: {str(e)}")
                logger.info(
                    "[SEQUENCE] Step 2: Room closure failed, but continuing sequence"
                )

            # Step 3: terminate the agent process
            logger.info("[SEQUENCE] Step 3: Terminating agent process")
            await asyncio.sleep(2)  # Give all operations time to finish

            logger.info("[SEQUENCE] Step 3: Force terminating agent process")
            import os

            os._exit(0)  # Force process termination

        except Exception as e:
            logger.error(f"[SEQUENCE] Error in interview completion sequence: {str(e)}")
            # Fallback: force-terminate the process even on errors
            logger.info("[SEQUENCE] Fallback: Force terminating process")
            await asyncio.sleep(1)
            import os

            os._exit(1)

    # --- Simplified handling of the user's response ---
    async def handle_user_input(user_response: str):
        current_section = interviewer.get_current_section()

        # Save the user's response
        dialogue_message = {
            "role": "user",
            "content": str(user_response)
            .encode("utf-8")
            .decode("utf-8"),  # Force UTF-8
            "timestamp": datetime.utcnow().isoformat(),
            "section": current_section.get("name", "Unknown"),
        }
        interviewer.conversation_history.append(dialogue_message)
        await save_dialogue_to_db(ctx.room.name, interviewer.conversation_history)

        # Update interview progress
        if not interviewer.intro_done:
            interviewer.intro_done = True

        # Update the message counter and track time
        interviewer.questions_asked_total += 1
        progress_info = await interviewer.track_interview_progress(user_response)
        logger.info(
            f"[PROGRESS] Messages: {progress_info['questions_asked']}, Time: {progress_info['elapsed_minutes']}min/{progress_info['time_percentage']}"
        )

        # Update the agent instructions with current progress
        try:
            updated_instructions = interviewer.get_system_instructions()
@ -443,7 +569,7 @@ async def entrypoint(ctx: JobContext):
@session.on("conversation_item_added") @session.on("conversation_item_added")
def on_conversation_item(event): def on_conversation_item(event):
role = event.item.role role = event.item.role
text = event.item.text_content text = event.item.text_content
if role == "user": if role == "user":
@ -451,27 +577,34 @@ async def entrypoint(ctx: JobContext):
elif role == "assistant": elif role == "assistant":
# Сохраняем ответ агента в историю диалога # Сохраняем ответ агента в историю диалога
current_section = interviewer.get_current_section() current_section = interviewer.get_current_section()
interviewer.conversation_history.append({ interviewer.conversation_history.append(
"role": "assistant", {
"content": str(text).encode('utf-8').decode('utf-8'), # Принудительное UTF-8 "role": "assistant",
"timestamp": datetime.utcnow().isoformat(), "content": str(text)
"section": current_section.get('name', 'Unknown') .encode("utf-8")
}) .decode("utf-8"), # Принудительное UTF-8
"timestamp": datetime.utcnow().isoformat(),
"section": current_section.get("name", "Unknown"),
}
)
# Сохраняем диалог в БД # Сохраняем диалог в БД
asyncio.create_task(save_dialogue_to_db(ctx.room.name, interviewer.conversation_history)) asyncio.create_task(
save_dialogue_to_db(ctx.room.name, interviewer.conversation_history)
)
# Проверяем ключевые фразы для завершения интервью # Проверяем ключевые фразы для завершения интервью
asyncio.create_task(check_interview_completion_by_keywords(text)) asyncio.create_task(check_interview_completion_by_keywords(text))
await session.start(agent=agent, room=ctx.room) await session.start(agent=agent, room=ctx.room)
logger.info("[INIT] AI Interviewer started") logger.info("[INIT] AI Interviewer started")
def main(): def main():
logging.basicConfig(level=logging.INFO) logging.basicConfig(level=logging.INFO)
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) # фикс для Windows asyncio.set_event_loop_policy(
asyncio.WindowsSelectorEventLoopPolicy()
) # фикс для Windows
cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint)) cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
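Stripped of logging, the completion sequence above is "best-effort cleanup, then unconditional force-exit". A minimal self-contained sketch of that pattern, with `close_room` and `terminate` as hypothetical stand-ins injected for testability (the real code calls the LiveKit API and `os._exit`):

```python
import asyncio


async def completion_sequence(close_room, terminate):
    """Run best-effort cleanup, then always force-terminate.

    close_room: async callable that may raise; terminate: callable taking an
    exit code (0 on the normal path, 1 on the fallback path).
    """
    try:
        try:
            await close_room()  # Step 2: may fail...
        except Exception:
            pass  # ...but the sequence continues
        await asyncio.sleep(0)  # Step 3: let pending operations settle
        terminate(0)  # normal force-exit path
    except Exception:
        terminate(1)  # fallback: force-exit even on errors


async def main() -> list:
    calls = []

    async def failing_close():
        calls.append("close")
        raise RuntimeError("room already gone")

    # terminate is recorded instead of killing the process, so the
    # ordering can be inspected
    await completion_sequence(failing_close, lambda code: calls.append(code))
    return calls


print(asyncio.run(main()))  # prints ['close', 0]
```

A failed room closure still reaches `terminate(0)`, which mirrors the "room closure failed, but continuing sequence" branch in the agent.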
View File

@@ -1,50 +1,49 @@
```python
from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    # Database
    database_url: str = "postgresql+asyncpg://tdjx:1309@localhost:5432/hr_ai"

    # Redis configuration (for Celery and caching)
    redis_cache_url: str = "localhost"
    redis_cache_port: int = 6379
    redis_cache_db: int = 0

    # Milvus vector database
    milvus_uri: str = "http://localhost:19530"
    milvus_collection: str = "candidate_profiles"

    # S3 storage
    s3_endpoint_url: str = "https://s3.selcdn.ru"
    s3_access_key_id: str
    s3_secret_access_key: str
    s3_bucket_name: str
    s3_region: str = "ru-1"

    # LLM API keys
    openai_api_key: str | None = None
    anthropic_api_key: str | None = None
    openai_model: str = "gpt-4o-mini"
    openai_embeddings_model: str = "text-embedding-3-small"

    # AI agent API keys (for the voice interviewer)
    deepgram_api_key: str | None = None
    cartesia_api_key: str | None = None
    elevenlabs_api_key: str | None = None
    resemble_api_key: str | None = None

    # LiveKit configuration
    livekit_url: str = "ws://localhost:7880"
    livekit_api_key: str = "devkey"
    livekit_api_secret: str = "devkey_secret_32chars_minimum_length"

    # App configuration
    app_env: str = "development"
    debug: bool = True

    class Config:
        env_file = ".env"


settings = Settings()
```
View File
@@ -1,7 +1,8 @@
```python
from collections.abc import AsyncGenerator, Generator

from sqlalchemy import create_engine
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
from sqlalchemy.orm import Session, sessionmaker
from sqlmodel import SQLModel

from .config import settings

# @@ -57,4 +58,4 @@ def get_sync_session() -> Generator[Session, None, None]:

async def create_db_and_tables():
    """Create the tables in the database"""
    async with async_engine.begin() as conn:
        await conn.run_sync(SQLModel.metadata.create_all)
```
View File

@@ -1,52 +1,52 @@
```python
import uuid

import boto3
from botocore.exceptions import ClientError

from app.core.config import settings


class S3Service:
    def __init__(self):
        self.s3_client = boto3.client(
            "s3",
            endpoint_url=settings.s3_endpoint_url,
            aws_access_key_id=settings.s3_access_key_id,
            aws_secret_access_key=settings.s3_secret_access_key,
            region_name=settings.s3_region,
        )
        self.bucket_name = settings.s3_bucket_name

    async def upload_file(
        self, file_content: bytes, file_name: str, content_type: str
    ) -> str | None:
        try:
            file_key = f"{uuid.uuid4()}_{file_name}"
            self.s3_client.put_object(
                Bucket=self.bucket_name,
                Key=file_key,
                Body=file_content,
                ContentType=content_type,
            )
            file_url = f"{settings.s3_endpoint_url}/{self.bucket_name}/{file_key}"
            return file_url
        except ClientError as e:
            print(f"Error uploading file to S3: {e}")
            return None

    async def delete_file(self, file_url: str) -> bool:
        try:
            file_key = file_url.split("/")[-1]
            self.s3_client.delete_object(Bucket=self.bucket_name, Key=file_key)
            return True
        except ClientError as e:
            print(f"Error deleting file from S3: {e}")
            return False


s3_service = S3Service()
```
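`delete_file` recovers the object key with `file_url.split("/")[-1]`, which only works because `upload_file` builds the URL as `endpoint/bucket/key` with a flat, UUID-prefixed key. A standalone sketch of that round trip (pure string logic, no boto3; the helper names are illustrative, not the service's API):

```python
import uuid


def build_file_url(endpoint: str, bucket: str, file_name: str) -> tuple[str, str]:
    # Mirrors upload_file: key = "<uuid4>_<file_name>", URL = endpoint/bucket/key
    file_key = f"{uuid.uuid4()}_{file_name}"
    return file_key, f"{endpoint}/{bucket}/{file_key}"


def key_from_url(file_url: str) -> str:
    # Mirrors delete_file: the key is assumed to be the last path segment
    return file_url.split("/")[-1]


key, url = build_file_url("https://s3.selcdn.ru", "resumes", "cv.pdf")
assert key_from_url(url) == key  # round-trips for flat file names

# Caveat: a file_name containing "/" breaks the round trip, because
# split("/")[-1] would return only the trailing segment of the key.
```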
View File
@@ -1,40 +1,46 @@
```python
import logging

from fastapi import Request, Response
from fastapi.responses import JSONResponse
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.types import ASGIApp

from app.core.database import get_session
from app.models.session import Session
from app.repositories.session_repository import SessionRepository

logger = logging.getLogger(__name__)


class SessionMiddleware(BaseHTTPMiddleware):
    """Middleware that manages sessions automatically"""

    def __init__(self, app: ASGIApp, cookie_name: str = "session_id"):
        super().__init__(app)
        self.cookie_name = cookie_name

    async def dispatch(self, request: Request, call_next):
        # Skip static files, service endpoints, and OPTIONS requests
        if (
            request.url.path.startswith(
                ("/docs", "/redoc", "/openapi.json", "/health", "/favicon.ico")
            )
            or request.method == "OPTIONS"
        ):
            return await call_next(request)

        # Read session_id from the cookie or header
        session_id = request.cookies.get(self.cookie_name) or request.headers.get(
            "X-Session-ID"
        )

        session_obj = None
        try:
            # Work with the DB inside a single async session
            async for db_session in get_session():
                session_repo = SessionRepository(db_session)

                # Look up an existing session
                if session_id:
                    session_obj = await session_repo.get_by_session_id(session_id)

                # @@ -47,10 +53,13 @@ class SessionMiddleware(BaseHTTPMiddleware):
                # Create a new session if there is no valid one
                if not session_obj:
                    user_agent = request.headers.get("User-Agent")
                    client_ip = (
                        getattr(request.client, "host", None)
                        if request.client
                        else None
                    )
                    session_obj = await session_repo.create_session(
                        user_agent=user_agent, ip_address=client_ip
                    )
                    logger.info(f"Created new session: {session_obj.session_id}")

        # @@ -61,13 +70,12 @@ class SessionMiddleware(BaseHTTPMiddleware):
        except Exception as e:
            logger.error(f"Session middleware error: {e}")
            return JSONResponse(
                status_code=500, content={"error": "Session management error"}
            )

        # Execute the request
        response = await call_next(request)

        # Set the session_id cookie on the response
        if session_obj and isinstance(response, Response):
            response.set_cookie(
                # @@ -76,7 +84,7 @@ (key/value arguments elided in this diff)
                max_age=30 * 24 * 60 * 60,  # 30 days
                httponly=True,
                secure=False,  # For the dev environment
                samesite="lax",
            )

        return response

# @@ -84,4 +92,4 @@ class SessionMiddleware(BaseHTTPMiddleware):

async def get_current_session(request: Request) -> Session:
    """Return the current session from the request context"""
    return getattr(request.state, "session", None)
```
View File
@@ -1,37 +1,37 @@
```python
from .interview import (
    InterviewSession,
    InterviewSessionCreate,
    InterviewSessionRead,
    InterviewSessionUpdate,
    InterviewStatus,
)
from .interview_report import (
    InterviewReport,
    InterviewReportCreate,
    InterviewReportRead,
    InterviewReportSummary,
    InterviewReportUpdate,
    RecommendationType,
)
from .resume import Resume, ResumeCreate, ResumeRead, ResumeUpdate
from .session import Session, SessionCreate, SessionRead
from .vacancy import Vacancy, VacancyCreate, VacancyRead, VacancyUpdate

__all__ = [
    "Vacancy",
    "VacancyCreate",
    "VacancyUpdate",
    "VacancyRead",
    "Resume",
    "ResumeCreate",
    "ResumeUpdate",
    "ResumeRead",
    "Session",
    "SessionCreate",
    "SessionRead",
    "InterviewSession",
    "InterviewSessionCreate",
    "InterviewSessionUpdate",
    "InterviewSessionRead",
    "InterviewStatus",
    "InterviewReport",
    # @@ -40,4 +40,4 @@ __all__ = [
    "InterviewReportRead",
    "InterviewReportSummary",
    "RecommendationType",
]
```
View File
@@ -1,8 +1,9 @@
```python
from datetime import datetime
from enum import Enum
from typing import Any, Optional

from sqlalchemy import JSON
from sqlmodel import Column, Field, Relationship, SQLModel


class InterviewStatus(str, Enum):
    # @@ -10,7 +11,7 @@ class InterviewStatus(str, Enum):
    ACTIVE = "active"
    COMPLETED = "completed"
    FAILED = "failed"

    def __str__(self):
        return self.value


class InterviewSessionBase(SQLModel):  # @@ -19,24 +20,29 @@
    resume_id: int = Field(foreign_key="resume.id")
    room_name: str = Field(max_length=255, unique=True)
    status: str = Field(default="created", max_length=50)
    transcript: str | None = None
    ai_feedback: str | None = None
    dialogue_history: list[dict[str, Any]] | None = Field(
        default=None, sa_column=Column(JSON)
    )
    # Track the AI agent process
    ai_agent_pid: int | None = None
    ai_agent_status: str = Field(
        default="not_started"
    )  # not_started, running, stopped, failed


class InterviewSession(InterviewSessionBase, table=True):
    __tablename__ = "interview_sessions"

    id: int | None = Field(default=None, primary_key=True)
    started_at: datetime = Field(default_factory=datetime.utcnow)
    completed_at: datetime | None = None

    # One-to-one relationship with the report
    report: Optional["InterviewReport"] = Relationship(
        back_populates="interview_session"
    )


class InterviewSessionCreate(SQLModel):
    # @@ -45,17 +51,17 @@ (unchanged fields elided in this diff)


class InterviewSessionUpdate(SQLModel):
    status: InterviewStatus | None = None
    completed_at: datetime | None = None
    transcript: str | None = None
    ai_feedback: str | None = None
    dialogue_history: list[dict[str, Any]] | None = None


class InterviewSessionRead(InterviewSessionBase):
    id: int
    started_at: datetime
    completed_at: datetime | None = None


class InterviewValidationResponse(SQLModel):
    # @@ -66,4 +72,4 @@ (unchanged fields elided in this diff)


class LiveKitTokenResponse(SQLModel):
    token: str
    room_name: str
    server_url: str
```
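Both status enums here override `__str__` to return the raw value. Without the override, `str()` on a `(str, Enum)` member renders the member name (e.g. `Status.ACTIVE`), which then leaks into f-strings, logs, and serialized fields; with it, you get the plain value. A minimal comparison using throwaway enums (not the project's classes):

```python
from enum import Enum


class PlainStatus(str, Enum):
    ACTIVE = "active"


class Status(str, Enum):
    ACTIVE = "active"

    def __str__(self):
        return self.value


# The override yields the bare value; the default renders the member name.
print(str(Status.ACTIVE), "vs", str(PlainStatus.ACTIVE))
```

Equality against the raw string works either way (`Status.ACTIVE == "active"`) because of the `str` mixin; the override only changes how the member is displayed.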
View File
@ -1,176 +1,191 @@
# -*- coding: utf-8 -*-
from sqlmodel import SQLModel, Field, Column, Relationship
from sqlalchemy import JSON, String, Integer, Float, Text
from datetime import datetime from datetime import datetime
from typing import Optional, List, Dict, Any
from enum import Enum from enum import Enum
from typing import Any, Optional
from sqlalchemy import JSON, Text
from sqlmodel import Column, Field, Relationship, SQLModel
class RecommendationType(str, Enum): class RecommendationType(str, Enum):
STRONGLY_RECOMMEND = "strongly_recommend" STRONGLY_RECOMMEND = "strongly_recommend"
RECOMMEND = "recommend" RECOMMEND = "recommend"
CONSIDER = "consider" CONSIDER = "consider"
REJECT = "reject" REJECT = "reject"
def __str__(self): def __str__(self):
return self.value return self.value
class InterviewReportBase(SQLModel): class InterviewReportBase(SQLModel):
"""Базовая модель отчета по интервью""" """Базовая модель отчета по интервью"""
interview_session_id: int = Field(foreign_key="interview_sessions.id", unique=True) interview_session_id: int = Field(foreign_key="interview_sessions.id", unique=True)
# Основные критерии оценки (0-100) # Основные критерии оценки (0-100)
technical_skills_score: int = Field(ge=0, le=100) technical_skills_score: int = Field(ge=0, le=100)
technical_skills_justification: Optional[str] = Field(default=None, max_length=1000) technical_skills_justification: str | None = Field(default=None, max_length=1000)
technical_skills_concerns: Optional[str] = Field(default=None, max_length=500) technical_skills_concerns: str | None = Field(default=None, max_length=500)
experience_relevance_score: int = Field(ge=0, le=100) experience_relevance_score: int = Field(ge=0, le=100)
experience_relevance_justification: Optional[str] = Field(default=None, max_length=1000) experience_relevance_justification: str | None = Field(
experience_relevance_concerns: Optional[str] = Field(default=None, max_length=500) default=None, max_length=1000
)
experience_relevance_concerns: str | None = Field(default=None, max_length=500)
communication_score: int = Field(ge=0, le=100) communication_score: int = Field(ge=0, le=100)
communication_justification: Optional[str] = Field(default=None, max_length=1000) communication_justification: str | None = Field(default=None, max_length=1000)
communication_concerns: Optional[str] = Field(default=None, max_length=500) communication_concerns: str | None = Field(default=None, max_length=500)
problem_solving_score: int = Field(ge=0, le=100) problem_solving_score: int = Field(ge=0, le=100)
problem_solving_justification: Optional[str] = Field(default=None, max_length=1000) problem_solving_justification: str | None = Field(default=None, max_length=1000)
problem_solving_concerns: Optional[str] = Field(default=None, max_length=500) problem_solving_concerns: str | None = Field(default=None, max_length=500)
cultural_fit_score: int = Field(ge=0, le=100) cultural_fit_score: int = Field(ge=0, le=100)
cultural_fit_justification: Optional[str] = Field(default=None, max_length=1000) cultural_fit_justification: str | None = Field(default=None, max_length=1000)
cultural_fit_concerns: Optional[str] = Field(default=None, max_length=500) cultural_fit_concerns: str | None = Field(default=None, max_length=500)
# Агрегированные поля # Агрегированные поля
overall_score: int = Field(ge=0, le=100) overall_score: int = Field(ge=0, le=100)
recommendation: RecommendationType recommendation: RecommendationType
# Дополнительные поля для анализа # Дополнительные поля для анализа
strengths: Optional[List[str]] = Field(default=None, sa_column=Column(JSON)) strengths: list[str] | None = Field(default=None, sa_column=Column(JSON))
weaknesses: Optional[List[str]] = Field(default=None, sa_column=Column(JSON)) weaknesses: list[str] | None = Field(default=None, sa_column=Column(JSON))
red_flags: Optional[List[str]] = Field(default=None, sa_column=Column(JSON)) red_flags: list[str] | None = Field(default=None, sa_column=Column(JSON))
# Метрики интервью # Метрики интервью
questions_quality_score: Optional[float] = Field(default=None, ge=0, le=10) # Средняя оценка ответов questions_quality_score: float | None = Field(
interview_duration_minutes: Optional[int] = Field(default=None, ge=0) default=None, ge=0, le=10
response_count: Optional[int] = Field(default=None, ge=0) ) # Средняя оценка ответов
dialogue_messages_count: Optional[int] = Field(default=None, ge=0) interview_duration_minutes: int | None = Field(default=None, ge=0)
response_count: int | None = Field(default=None, ge=0)
dialogue_messages_count: int | None = Field(default=None, ge=0)
# Дополнительная информация # Дополнительная информация
next_steps: Optional[str] = Field(default=None, max_length=1000) next_steps: str | None = Field(default=None, max_length=1000)
interviewer_notes: Optional[str] = Field(default=None, sa_column=Column(Text)) interviewer_notes: str | None = Field(default=None, sa_column=Column(Text))
# Детальный анализ вопросов (JSON) # Детальный анализ вопросов (JSON)
questions_analysis: Optional[List[Dict[str, Any]]] = Field(default=None, sa_column=Column(JSON)) questions_analysis: list[dict[str, Any]] | None = Field(
default=None, sa_column=Column(JSON)
)
# Метаданные анализа # Метаданные анализа
analysis_method: Optional[str] = Field(default="openai_gpt4", max_length=50) # openai_gpt4, fallback_heuristic analysis_method: str | None = Field(
llm_model_used: Optional[str] = Field(default=None, max_length=100) default="openai_gpt4", max_length=50
analysis_duration_seconds: Optional[int] = Field(default=None, ge=0) ) # openai_gpt4, fallback_heuristic
llm_model_used: str | None = Field(default=None, max_length=100)
analysis_duration_seconds: int | None = Field(default=None, ge=0)
class InterviewReport(InterviewReportBase, table=True): class InterviewReport(InterviewReportBase, table=True):
"""Полный отчет по интервью с ID и временными метками""" """Полный отчет по интервью с ID и временными метками"""
__tablename__ = "interview_reports" __tablename__ = "interview_reports"
id: Optional[int] = Field(default=None, primary_key=True) id: int | None = Field(default=None, primary_key=True)
created_at: datetime = Field(default_factory=datetime.utcnow) created_at: datetime = Field(default_factory=datetime.utcnow)
updated_at: datetime = Field(default_factory=datetime.utcnow) updated_at: datetime = Field(default_factory=datetime.utcnow)
# Связь с сессией интервью # Связь с сессией интервью
interview_session: Optional["InterviewSession"] = Relationship(back_populates="report") interview_session: Optional["InterviewSession"] = Relationship(
back_populates="report"
)
class InterviewReportCreate(SQLModel): class InterviewReportCreate(SQLModel):
"""Модель для создания отчета""" """Модель для создания отчета"""
interview_session_id: int interview_session_id: int
technical_skills_score: int = Field(ge=0, le=100) technical_skills_score: int = Field(ge=0, le=100)
technical_skills_justification: Optional[str] = None technical_skills_justification: str | None = None
technical_skills_concerns: Optional[str] = None technical_skills_concerns: str | None = None
experience_relevance_score: int = Field(ge=0, le=100) experience_relevance_score: int = Field(ge=0, le=100)
experience_relevance_justification: Optional[str] = None experience_relevance_justification: str | None = None
experience_relevance_concerns: Optional[str] = None experience_relevance_concerns: str | None = None
communication_score: int = Field(ge=0, le=100) communication_score: int = Field(ge=0, le=100)
communication_justification: Optional[str] = None communication_justification: str | None = None
communication_concerns: Optional[str] = None communication_concerns: str | None = None
problem_solving_score: int = Field(ge=0, le=100) problem_solving_score: int = Field(ge=0, le=100)
problem_solving_justification: Optional[str] = None problem_solving_justification: str | None = None
problem_solving_concerns: Optional[str] = None problem_solving_concerns: str | None = None
cultural_fit_score: int = Field(ge=0, le=100) cultural_fit_score: int = Field(ge=0, le=100)
cultural_fit_justification: Optional[str] = None cultural_fit_justification: str | None = None
cultural_fit_concerns: Optional[str] = None cultural_fit_concerns: str | None = None
overall_score: int = Field(ge=0, le=100) overall_score: int = Field(ge=0, le=100)
recommendation: RecommendationType recommendation: RecommendationType
strengths: Optional[List[str]] = None strengths: list[str] | None = None
weaknesses: Optional[List[str]] = None weaknesses: list[str] | None = None
red_flags: Optional[List[str]] = None red_flags: list[str] | None = None
questions_quality_score: Optional[float] = None questions_quality_score: float | None = None
interview_duration_minutes: Optional[int] = None interview_duration_minutes: int | None = None
response_count: Optional[int] = None response_count: int | None = None
dialogue_messages_count: Optional[int] = None dialogue_messages_count: int | None = None
next_steps: Optional[str] = None next_steps: str | None = None
interviewer_notes: Optional[str] = None interviewer_notes: str | None = None
questions_analysis: Optional[List[Dict[str, Any]]] = None questions_analysis: list[dict[str, Any]] | None = None
analysis_method: Optional[str] = "openai_gpt4" analysis_method: str | None = "openai_gpt4"
llm_model_used: Optional[str] = None llm_model_used: str | None = None
analysis_duration_seconds: Optional[int] = None analysis_duration_seconds: int | None = None
class InterviewReportUpdate(SQLModel): class InterviewReportUpdate(SQLModel):
"""Модель для обновления отчета""" """Модель для обновления отчета"""
technical_skills_score: Optional[int] = Field(default=None, ge=0, le=100)
technical_skills_justification: Optional[str] = None technical_skills_score: int | None = Field(default=None, ge=0, le=100)
technical_skills_concerns: Optional[str] = None technical_skills_justification: str | None = None
technical_skills_concerns: str | None = None
    experience_relevance_score: int | None = Field(default=None, ge=0, le=100)
    experience_relevance_justification: str | None = None
    experience_relevance_concerns: str | None = None

    communication_score: int | None = Field(default=None, ge=0, le=100)
    communication_justification: str | None = None
    communication_concerns: str | None = None

    problem_solving_score: int | None = Field(default=None, ge=0, le=100)
    problem_solving_justification: str | None = None
    problem_solving_concerns: str | None = None

    cultural_fit_score: int | None = Field(default=None, ge=0, le=100)
    cultural_fit_justification: str | None = None
    cultural_fit_concerns: str | None = None

    overall_score: int | None = Field(default=None, ge=0, le=100)
    recommendation: RecommendationType | None = None

    strengths: list[str] | None = None
    weaknesses: list[str] | None = None
    red_flags: list[str] | None = None

    questions_quality_score: float | None = None
    interview_duration_minutes: int | None = None
    response_count: int | None = None
    dialogue_messages_count: int | None = None

    next_steps: str | None = None
    interviewer_notes: str | None = None
    questions_analysis: list[dict[str, Any]] | None = None

    analysis_method: str | None = None
    llm_model_used: str | None = None
    analysis_duration_seconds: int | None = None
class InterviewReportRead(InterviewReportBase):
    """Read model for a report, with ID and timestamps"""

    id: int
    created_at: datetime
    updated_at: datetime
@@ -178,22 +193,23 @@ class InterviewReportRead(InterviewReportBase):
class InterviewReportSummary(SQLModel):
    """Short report summary for list views"""

    id: int
    interview_session_id: int
    overall_score: int
    recommendation: RecommendationType
    created_at: datetime

    # Main scores
    technical_skills_score: int
    experience_relevance_score: int
    communication_score: int
    problem_solving_score: int
    cultural_fit_score: int

    # Key takeaways
    strengths: list[str] | None = None
    red_flags: list[str] | None = None
# Indexes for efficient scoring queries
@@ -204,4 +220,4 @@ CREATE INDEX idx_interview_reports_recommendation ON interview_reports (recommen
CREATE INDEX idx_interview_reports_technical_skills ON interview_reports (technical_skills_score DESC);
CREATE INDEX idx_interview_reports_communication ON interview_reports (communication_score DESC);
CREATE INDEX idx_interview_reports_session_id ON interview_reports (interview_session_id);
"""
@@ -1,13 +1,13 @@
from datetime import datetime
from enum import Enum

from sqlalchemy import JSON
from sqlmodel import Column, Field, SQLModel
class ResumeStatus(str, Enum):
    PENDING = "pending"
    PARSING = "parsing"
    PARSED = "parsed"
    PARSE_FAILED = "parse_failed"
    UNDER_REVIEW = "under_review"
@@ -22,19 +22,19 @@ class ResumeBase(SQLModel):
    session_id: int = Field(foreign_key="session.id")
    applicant_name: str = Field(max_length=255)
    applicant_email: str = Field(max_length=255)
    applicant_phone: str | None = Field(max_length=50)
    resume_file_url: str
    cover_letter: str | None = None
    status: ResumeStatus = Field(default=ResumeStatus.PENDING)
    interview_report_url: str | None = None
    notes: str | None = None
    parsed_data: dict | None = Field(default=None, sa_column=Column(JSON))
    interview_plan: dict | None = Field(default=None, sa_column=Column(JSON))
    parse_error: str | None = None
class Resume(ResumeBase, table=True):
    id: int | None = Field(default=None, primary_key=True)
    created_at: datetime = Field(default_factory=datetime.utcnow)
    updated_at: datetime = Field(default_factory=datetime.utcnow)
@@ -43,25 +43,25 @@ class ResumeCreate(SQLModel):
    vacancy_id: int
    applicant_name: str = Field(max_length=255)
    applicant_email: str = Field(max_length=255)
    applicant_phone: str | None = Field(max_length=50)
    resume_file_url: str
    cover_letter: str | None = None
class ResumeUpdate(SQLModel):
    applicant_name: str | None = None
    applicant_email: str | None = None
    applicant_phone: str | None = None
    cover_letter: str | None = None
    status: ResumeStatus | None = None
    interview_report_url: str | None = None
    notes: str | None = None
    parsed_data: dict | None = None
    interview_plan: dict | None = None
    parse_error: str | None = None
class ResumeRead(ResumeBase):
    id: int
    created_at: datetime
    updated_at: datetime
@@ -1,30 +1,32 @@
import uuid
from datetime import datetime, timedelta

from sqlmodel import Field, SQLModel
class SessionBase(SQLModel):
    session_id: str = Field(max_length=255, unique=True, index=True)
    user_agent: str | None = Field(max_length=512)
    ip_address: str | None = Field(max_length=45)
    is_active: bool = Field(default=True)
    expires_at: datetime = Field(
        default_factory=lambda: datetime.utcnow() + timedelta(days=30)
    )
    last_activity: datetime = Field(default_factory=datetime.utcnow)
class Session(SessionBase, table=True):
    id: int | None = Field(default=None, primary_key=True)
    created_at: datetime = Field(default_factory=datetime.utcnow)
    updated_at: datetime = Field(default_factory=datetime.utcnow)

    @classmethod
    def create_new_session(
        cls, user_agent: str | None = None, ip_address: str | None = None
    ) -> "Session":
        """Create a new session with a unique session_id"""
        return cls(
            session_id=str(uuid.uuid4()), user_agent=user_agent, ip_address=ip_address
        )

    def is_expired(self) -> bool:
@@ -44,4 +46,4 @@ class SessionCreate(SessionBase):
class SessionRead(SessionBase):
    id: int
    created_at: datetime
    updated_at: datetime
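The session lifecycle above — a UUID `session_id` and a 30-day expiry window — can be sketched without SQLModel. `SessionStub` here is a hypothetical stand-in that mirrors `create_new_session` and the expiry check, not the real model:

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class SessionStub:
    # Mirrors SessionBase defaults: unique id, 30-day expiry window
    session_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    expires_at: datetime = field(
        default_factory=lambda: datetime.utcnow() + timedelta(days=30)
    )

    def is_expired(self) -> bool:
        return datetime.utcnow() > self.expires_at


s = SessionStub()
print(s.is_expired())  # a freshly created session is not expired
```

A session constructed with a past `expires_at` reports expired, which is the condition the repository's cleanup relies on.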
@@ -1,8 +1,8 @@
from datetime import datetime
from enum import Enum

from sqlmodel import Field, SQLModel
class EmploymentType(str, Enum):
    FULL_TIME = "full"
@@ -15,7 +15,7 @@ class EmploymentType(str, Enum):
class Experience(str, Enum):
    NO_EXPERIENCE = "noExperience"
    BETWEEN_1_AND_3 = "between1And3"
    BETWEEN_3_AND_6 = "between3And6"
    MORE_THAN_6 = "moreThan6"
@@ -30,31 +30,31 @@ class Schedule(str, Enum):
class VacancyBase(SQLModel):
    title: str = Field(max_length=255)
    description: str
    key_skills: str | None = None
    employment_type: EmploymentType
    experience: Experience
    schedule: Schedule
    salary_from: int | None = None
    salary_to: int | None = None
    salary_currency: str | None = Field(default="RUR", max_length=3)
    gross_salary: bool | None = False
    company_name: str = Field(max_length=255)
    company_description: str | None = None
    area_name: str = Field(max_length=255)
    metro_stations: str | None = None
    address: str | None = None
    professional_roles: str | None = None
    contacts_name: str | None = Field(max_length=255)
    contacts_email: str | None = Field(max_length=255)
    contacts_phone: str | None = Field(max_length=50)
    is_archived: bool = Field(default=False)
    premium: bool = Field(default=False)
    published_at: datetime | None = Field(default_factory=datetime.utcnow)
    url: str | None = None
class Vacancy(VacancyBase, table=True):
    id: int | None = Field(default=None, primary_key=True)
    created_at: datetime = Field(default_factory=datetime.utcnow)
    updated_at: datetime = Field(default_factory=datetime.utcnow)
@@ -64,32 +64,32 @@ class VacancyCreate(VacancyBase):
class VacancyUpdate(SQLModel):
    title: str | None = None
    description: str | None = None
    key_skills: str | None = None
    employment_type: EmploymentType | None = None
    experience: Experience | None = None
    schedule: Schedule | None = None
    salary_from: int | None = None
    salary_to: int | None = None
    salary_currency: str | None = None
    gross_salary: bool | None = None
    company_name: str | None = None
    company_description: str | None = None
    area_name: str | None = None
    metro_stations: str | None = None
    address: str | None = None
    professional_roles: str | None = None
    contacts_name: str | None = None
    contacts_email: str | None = None
    contacts_phone: str | None = None
    is_archived: bool | None = None
    premium: bool | None = None
    published_at: datetime | None = None
    url: str | None = None
class VacancyRead(VacancyBase):
    id: int
    created_at: datetime
    updated_at: datetime
@@ -1,5 +1,5 @@
from .interview_repository import InterviewRepository
from .resume_repository import ResumeRepository
from .vacancy_repository import VacancyRepository

__all__ = ["VacancyRepository", "ResumeRepository", "InterviewRepository"]
@@ -1,15 +1,21 @@
from typing import Annotated, Generic, TypeVar

from fastapi import Depends
from sqlalchemy import delete, select
from sqlalchemy.ext.asyncio import AsyncSession
from sqlmodel import SQLModel

from app.core.database import get_session

ModelType = TypeVar("ModelType", bound=SQLModel)
class BaseRepository(Generic[ModelType]):
    def __init__(
        self,
        model: type[ModelType],
        session: Annotated[AsyncSession, Depends(get_session)],
    ):
        self.model = model
        self._session = session
@@ -20,29 +26,29 @@ class BaseRepository(Generic[ModelType]):
        await self._session.refresh(db_obj)
        return db_obj

    async def get(self, id: int) -> ModelType | None:
        statement = select(self.model).where(self.model.id == id)
        result = await self._session.execute(statement)
        return result.scalar_one_or_none()

    async def get_all(self, skip: int = 0, limit: int = 100) -> list[ModelType]:
        statement = select(self.model).offset(skip).limit(limit)
        result = await self._session.execute(statement)
        return result.scalars().all()

    async def update(self, id: int, obj_in: dict) -> ModelType | None:
        # Fetch the object and update it directly
        result = await self._session.execute(
            select(self.model).where(self.model.id == id)
        )
        db_obj = result.scalar_one_or_none()
        if not db_obj:
            return None

        for key, value in obj_in.items():
            setattr(db_obj, key, value)

        await self._session.commit()
        await self._session.refresh(db_obj)
        return db_obj
@@ -51,4 +57,4 @@ class BaseRepository(Generic[ModelType]):
        statement = delete(self.model).where(self.model.id == id)
        result = await self._session.execute(statement)
        await self._session.commit()
        return result.rowcount > 0
@@ -1,24 +1,30 @@
from datetime import datetime
from typing import Annotated

from fastapi import Depends
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from app.core.database import get_session
from app.models.interview import InterviewSession
from app.repositories.base_repository import BaseRepository
class InterviewRepository(BaseRepository[InterviewSession]):
    def __init__(self, session: Annotated[AsyncSession, Depends(get_session)]):
        super().__init__(InterviewSession, session)
    async def get_by_room_name(self, room_name: str) -> InterviewSession | None:
        """Get an interview session by room name"""
        statement = select(InterviewSession).where(
            InterviewSession.room_name == room_name
        )
        result = await self._session.execute(statement)
        return result.scalar_one_or_none()

    async def update_status(
        self, session_id: int, status: str, completed_at: datetime | None = None
    ) -> bool:
        """Update a session's status"""
        try:
            # Fetch the object and update it directly
@@ -26,21 +32,23 @@ class InterviewRepository(BaseRepository[InterviewSession]):
                select(InterviewSession).where(InterviewSession.id == session_id)
            )
            session_obj = result.scalar_one_or_none()

            if not session_obj:
                return False

            session_obj.status = status
            if completed_at:
                session_obj.completed_at = completed_at

            await self._session.commit()
            return True
        except Exception:
            await self._session.rollback()
            return False
    async def update_dialogue_history(
        self, room_name: str, dialogue_history: list
    ) -> bool:
        """Update the dialogue history for a session"""
        try:
            # Fetch the object and update it directly
@@ -48,18 +56,20 @@ class InterviewRepository(BaseRepository[InterviewSession]):
                select(InterviewSession).where(InterviewSession.room_name == room_name)
            )
            session_obj = result.scalar_one_or_none()

            if not session_obj:
                return False

            session_obj.dialogue_history = dialogue_history
            await self._session.commit()
            return True
        except Exception:
            await self._session.rollback()
            return False
    async def update_ai_agent_status(
        self, session_id: int, pid: int | None = None, status: str = "not_started"
    ) -> bool:
        """Update the AI agent status"""
        try:
            # Fetch the object and update it directly
@@ -67,10 +77,10 @@ class InterviewRepository(BaseRepository[InterviewSession]):
                select(InterviewSession).where(InterviewSession.id == session_id)
            )
            session_obj = result.scalar_one_or_none()

            if not session_obj:
                return False

            session_obj.ai_agent_pid = pid
            session_obj.ai_agent_status = status
            await self._session.commit()
@@ -78,8 +88,8 @@ class InterviewRepository(BaseRepository[InterviewSession]):
        except Exception:
            await self._session.rollback()
            return False

    async def get_sessions_with_running_agents(self) -> list[InterviewSession]:
        """Get sessions with running AI agents"""
        statement = select(InterviewSession).where(
            InterviewSession.ai_agent_status == "running"
@@ -87,7 +97,9 @@ class InterviewRepository(BaseRepository[InterviewSession]):
        result = await self._session.execute(statement)
        return result.scalars().all()

    async def get_active_session_by_resume_id(
        self, resume_id: int
    ) -> InterviewSession | None:
        """Get the active interview session for a resume"""
        statement = (
            select(InterviewSession)
@@ -98,13 +110,13 @@ class InterviewRepository(BaseRepository[InterviewSession]):
        result = await self._session.execute(statement)
        return result.scalar_one_or_none()

    async def create_interview_session(
        self, resume_id: int, room_name: str
    ) -> InterviewSession:
        """Create a new interview session"""
        from app.models.interview import InterviewSessionCreate

        session_data = InterviewSessionCreate(resume_id=resume_id, room_name=room_name)
        return await self.create(session_data.model_dump())
    async def update_session_status(self, session_id: int, status: str) -> bool:
@@ -112,4 +124,4 @@ class InterviewRepository(BaseRepository[InterviewSession]):
        completed_at = None
        if status == "completed":
            completed_at = datetime.utcnow()
        return await self.update_status(session_id, status, completed_at)
@@ -1,9 +1,12 @@
from typing import Annotated

from fastapi import Depends
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from app.core.database import get_session
from app.models.resume import Resume, ResumeStatus

from .base_repository import BaseRepository
@@ -11,45 +14,49 @@ class ResumeRepository(BaseRepository[Resume]):
    def __init__(self, session: Annotated[AsyncSession, Depends(get_session)]):
        super().__init__(Resume, session)

    async def get_by_vacancy_id(self, vacancy_id: int) -> list[Resume]:
        statement = select(Resume).where(Resume.vacancy_id == vacancy_id)
        result = await self._session.execute(statement)
        return result.scalars().all()

    async def get_by_status(self, status: ResumeStatus) -> list[Resume]:
        statement = select(Resume).where(Resume.status == status)
        result = await self._session.execute(statement)
        return result.scalars().all()

    async def get_by_id(self, resume_id: int) -> Resume | None:
        """Get a resume by ID"""
        return await self.get(resume_id)

    async def create_with_session(self, resume_dict: dict, session_id: int) -> Resume:
        """Create a resume bound to a session"""
        resume_dict["session_id"] = session_id
        return await self.create(resume_dict)

    async def get_by_session_id(self, session_id: int) -> list[Resume]:
        """Get resumes by session_id"""
        statement = select(Resume).where(Resume.session_id == session_id)
        result = await self._session.execute(statement)
        return result.scalars().all()

    async def get_by_vacancy_and_session(
        self, vacancy_id: int, session_id: int
    ) -> list[Resume]:
        """Get resumes by vacancy_id and session_id"""
        statement = select(Resume).where(
            Resume.vacancy_id == vacancy_id, Resume.session_id == session_id
        )
        result = await self._session.execute(statement)
        return result.scalars().all()

    async def update_status(
        self, resume_id: int, status: ResumeStatus
    ) -> Resume | None:
        """Update a resume's status"""
        return await self.update(resume_id, {"status": status})

    async def add_interview_report(
        self, resume_id: int, report_url: str
    ) -> Resume | None:
        """Attach an interview report URL"""
        return await self.update(resume_id, {"interview_report_url": report_url})
@@ -1,30 +1,36 @@
from datetime import datetime
from typing import Annotated

from fastapi import Depends
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from app.core.database import get_session
from app.models.session import Session
from app.repositories.base_repository import BaseRepository
class SessionRepository(BaseRepository[Session]):
    def __init__(self, session: Annotated[AsyncSession, Depends(get_session)]):
        super().__init__(Session, session)

    async def get_by_session_id(self, session_id: str) -> Session | None:
        """Get session by session_id"""
        statement = select(Session).where(
            Session.session_id == session_id,
            Session.is_active == True,
            Session.expires_at > datetime.utcnow(),
        )
        result = await self._session.execute(statement)
        return result.scalar_one_or_none()

    async def create_session(
        self, user_agent: str | None = None, ip_address: str | None = None
    ) -> Session:
        """Create a new session"""
        new_session = Session.create_new_session(
            user_agent=user_agent, ip_address=ip_address
        )
        return await self.create(new_session)

    async def deactivate_session(self, session_id: str) -> bool:
@@ -56,11 +62,11 @@ class SessionRepository(BaseRepository[Session]):
        statement = select(Session).where(Session.expires_at < datetime.utcnow())
        result = await self._session.execute(statement)
        expired_sessions = result.scalars().all()

        count = 0
        for session in expired_sessions:
            await self._session.delete(session)
            count += 1

        await self._session.commit()
        return count
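The cleanup above deletes each expired row and returns the count. The same selection logic, sketched over a plain list of `(id, expires_at)` pairs instead of ORM rows (a hypothetical stand-in, not the repository code):

```python
from datetime import datetime, timedelta


def cleanup_expired(sessions: list[tuple[int, datetime]]) -> int:
    """Mirror cleanup: drop rows whose expires_at is in the past, return the count."""
    now = datetime.utcnow()
    kept = [(sid, exp) for sid, exp in sessions if exp >= now]
    removed = len(sessions) - len(kept)
    sessions[:] = kept  # mutate in place, like deleting rows from the table
    return removed


now = datetime.utcnow()
rows = [(1, now - timedelta(days=1)), (2, now + timedelta(days=1))]
print(cleanup_expired(rows))  # 1
```

This kind of pure-function mirror is also how the deletion logic can be unit-tested without a database fixture.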
@@ -1,9 +1,12 @@
from typing import Annotated

from fastapi import Depends
from sqlalchemy import and_, select
from sqlalchemy.ext.asyncio import AsyncSession

from app.core.database import get_session
from app.models.vacancy import Vacancy

from .base_repository import BaseRepository
@@ -11,12 +14,12 @@ class VacancyRepository(BaseRepository[Vacancy]):
    def __init__(self, session: Annotated[AsyncSession, Depends(get_session)]):
        super().__init__(Vacancy, session)

    async def get_by_company(self, company_name: str) -> list[Vacancy]:
        statement = select(Vacancy).where(Vacancy.company_name == company_name)
        result = await self._session.execute(statement)
        return result.scalars().all()

    async def get_active(self, skip: int = 0, limit: int = 100) -> list[Vacancy]:
        statement = (
            select(Vacancy)
            .where(Vacancy.is_archived == False)
@@ -28,12 +31,12 @@ class VacancyRepository(BaseRepository[Vacancy]):
    async def search(
        self,
        title: str | None = None,
        company_name: str | None = None,
        area_name: str | None = None,
        skip: int = 0,
        limit: int = 100,
    ) -> list[Vacancy]:
        """Search vacancies by criteria"""
        statement = select(Vacancy)
        conditions = []
@@ -52,28 +55,29 @@ class VacancyRepository(BaseRepository[Vacancy]):
        result = await self._session.execute(statement)
        return result.scalars().all()

    async def get_active_vacancies(
        self, skip: int = 0, limit: int = 100
    ) -> list[Vacancy]:
        """Get active vacancies (alias for get_active)"""
        return await self.get_active(skip=skip, limit=limit)

    async def search_vacancies(
        self,
        title: str | None = None,
        company_name: str | None = None,
        area_name: str | None = None,
        skip: int = 0,
        limit: int = 100,
    ) -> list[Vacancy]:
        """Search vacancies (alias for search)"""
        return await self.search(
            title=title,
            company_name=company_name,
            area_name=area_name,
            skip=skip,
            limit=limit,
        )

    async def archive(self, vacancy_id: int) -> Vacancy | None:
        """Archive a vacancy"""
        return await self.update(vacancy_id, {"is_active": False})
View File

@@ -1,4 +1,4 @@
from .resume_router import router as resume_router
from .vacancy_router import router as vacancy_router

__all__ = ["vacancy_router", "resume_router"]
View File

@@ -1,79 +1,118 @@
from fastapi import APIRouter, Depends, HTTPException

from app.services.admin_service import AdminService
from app.services.agent_manager import agent_manager

router = APIRouter(prefix="/admin", tags=["Admin"])


@router.get("/interview-processes")
async def list_active_interview_processes(
    admin_service: AdminService = Depends(AdminService),
) -> dict:
    """List all active AI interview processes"""
    return await admin_service.get_active_interview_processes()


@router.post("/interview-processes/{session_id}/stop")
async def stop_interview_process(
    session_id: int, admin_service: AdminService = Depends(AdminService)
) -> dict:
    """Stop the AI process for a specific interview"""
    result = await admin_service.stop_interview_process(session_id)
    if not result["success"]:
        raise HTTPException(status_code=404, detail=result["message"])
    return result


@router.post("/interview-processes/cleanup")
async def cleanup_dead_processes(
    admin_service: AdminService = Depends(AdminService),
) -> dict:
    """Clean up dead processes"""
    return await admin_service.cleanup_dead_processes()


@router.get("/system-stats")
async def get_system_stats(admin_service: AdminService = Depends(AdminService)) -> dict:
    """Overall system statistics"""
    result = await admin_service.get_system_stats()
    if "error" in result:
        raise HTTPException(status_code=500, detail=result["error"])
    return result


@router.get("/agent/status")
async def get_agent_status() -> dict:
    """AI agent status"""
    return {"agent": agent_manager.get_status()}


@router.post("/agent/start")
async def start_agent() -> dict:
    """Start the AI agent"""
    success = await agent_manager.start_agent()
    if success:
        return {"success": True, "message": "AI Agent started successfully"}
    else:
        raise HTTPException(status_code=500, detail="Failed to start AI Agent")


@router.post("/agent/stop")
async def stop_agent() -> dict:
    """Stop the AI agent"""
    success = await agent_manager.stop_agent()
    if success:
        return {"success": True, "message": "AI Agent stopped successfully"}
    else:
        raise HTTPException(status_code=500, detail="Failed to stop AI Agent")


@router.post("/agent/restart")
async def restart_agent() -> dict:
    """Restart the AI agent"""
    # Stop first
    await agent_manager.stop_agent()

    # Then start again
    success = await agent_manager.start_agent()
    if success:
        return {"success": True, "message": "AI Agent restarted successfully"}
    else:
        raise HTTPException(status_code=500, detail="Failed to restart AI Agent")


@router.get("/analytics/dashboard")
async def get_analytics_dashboard(
    admin_service: AdminService = Depends(AdminService),
) -> dict:
    """Main analytics dashboard"""
    return await admin_service.get_analytics_dashboard()


@router.get("/analytics/candidates/{vacancy_id}")
async def get_vacancy_analytics(
    vacancy_id: int, admin_service: AdminService = Depends(AdminService)
) -> dict:
    """Candidate analytics for a specific vacancy"""
    return await admin_service.get_vacancy_analytics(vacancy_id)


@router.post("/analytics/generate-reports/{vacancy_id}")
async def generate_reports_for_vacancy(
    vacancy_id: int, admin_service: AdminService = Depends(AdminService)
) -> dict:
    """Start report generation for all candidates of a vacancy"""
    result = await admin_service.generate_reports_for_vacancy(vacancy_id)
    if "error" in result:
        raise HTTPException(status_code=404, detail=result["error"])
    return result
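The `/admin/agent/restart` endpoint above composes stop-then-start against the shared `agent_manager`. A hedged in-memory sketch of the lifecycle contract these endpoints assume — the method names `start_agent`, `stop_agent`, and `get_status` come from the router, but the class internals here are hypothetical (the real manager presumably supervises a LiveKit agent process):

```python
import asyncio


class InMemoryAgentManager:
    """Hypothetical stand-in for app.services.agent_manager.agent_manager,
    showing only the contract the admin router relies on."""

    def __init__(self) -> None:
        self._running = False

    async def start_agent(self) -> bool:
        # A real implementation would spawn the agent process here
        self._running = True
        return True

    async def stop_agent(self) -> bool:
        # A real implementation would terminate the process and await exit
        self._running = False
        return True

    def get_status(self) -> dict:
        return {"running": self._running}


async def restart(manager: InMemoryAgentManager) -> bool:
    # Same stop-then-start sequence as the /admin/agent/restart endpoint
    await manager.stop_agent()
    return await manager.start_agent()


manager = InMemoryAgentManager()
ok = asyncio.run(restart(manager))
```

Note that the endpoint ignores the result of `stop_agent()`, so a restart succeeds even if the agent was not running; the sketch preserves that behavior.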
View File

@@ -1,20 +1,18 @@
from fastapi import APIRouter, BackgroundTasks, Depends, HTTPException
from pydantic import BaseModel

from app.repositories.resume_repository import ResumeRepository
from celery_worker.interview_analysis_task import (
    analyze_multiple_candidates,
    generate_interview_report,
)

router = APIRouter(prefix="/analysis", tags=["analysis"])


class AnalysisResponse(BaseModel):
    """Response for a started analysis task"""

    message: str
    resume_id: int
    task_id: str

@@ -22,11 +20,13 @@ class AnalysisResponse(BaseModel):
class BulkAnalysisRequest(BaseModel):
    """Bulk analysis request"""

    resume_ids: list[int]


class BulkAnalysisResponse(BaseModel):
    """Bulk analysis response"""

    message: str
    resume_count: int
    task_id: str

@@ -34,6 +34,7 @@ class BulkAnalysisResponse(BaseModel):
class CandidateRanking(BaseModel):
    """Candidate ranking entry"""

    resume_id: int
    candidate_name: str
    overall_score: int

@@ -45,32 +46,30 @@ class CandidateRanking(BaseModel):
async def start_interview_analysis(
    resume_id: int,
    background_tasks: BackgroundTasks,
    resume_repo: ResumeRepository = Depends(ResumeRepository),
):
    """
    Start interview analysis for a specific candidate.

    The analysis covers:
    - How well the resume matches the vacancy
    - Quality of the answers in the interview dialogue
    - Technical skills and experience
    - Communication skills
    - Overall recommendation and rating
    """
    # Check that the resume exists
    resume = await resume_repo.get_by_id(resume_id)
    if not resume:
        raise HTTPException(status_code=404, detail="Resume not found")

    # Start the analysis task
    task = generate_interview_report.delay(resume_id)

    return AnalysisResponse(
        message="Interview analysis started", resume_id=resume_id, task_id=task.id
    )

@@ -78,89 +77,87 @@ async def start_interview_analysis(
async def start_bulk_analysis(
    request: BulkAnalysisRequest,
    background_tasks: BackgroundTasks,
    resume_repo: ResumeRepository = Depends(ResumeRepository),
):
    """
    Start bulk analysis of several candidates.

    Returns a list of candidates ranked by overall score.
    Useful for comparing candidates for the same position.
    """
    # Check that all resumes exist
    existing_resumes = []
    for resume_id in request.resume_ids:
        resume = await resume_repo.get_by_id(resume_id)
        if resume:
            existing_resumes.append(resume_id)

    if not existing_resumes:
        raise HTTPException(status_code=404, detail="No valid resumes found")

    # Start the bulk analysis task
    task = analyze_multiple_candidates.delay(existing_resumes)

    return BulkAnalysisResponse(
        message="Bulk analysis started",
        resume_count=len(existing_resumes),
        task_id=task.id,
    )


@router.get("/ranking/{vacancy_id}")
async def get_candidates_ranking(
    vacancy_id: int, resume_repo: ResumeRepository = Depends(ResumeRepository)
):
    """
    Get a ranked list of candidates for a vacancy.

    Sorts candidates by their interview analysis results.
    Shows only candidates who completed the interview.
    """
    # Fetch all resumes for the vacancy with status "interviewed"
    resumes = await resume_repo.get_by_vacancy_id(vacancy_id)
    interviewed_resumes = [r for r in resumes if r.status in ["interviewed"]]

    if not interviewed_resumes:
        return {
            "vacancy_id": vacancy_id,
            "candidates": [],
            "message": "No interviewed candidates found",
        }

    # Start bulk analysis if it has not run yet
    resume_ids = [r.id for r in interviewed_resumes]
    task = analyze_multiple_candidates.delay(resume_ids)

    # In production this should await completion or read from a cache;
    # for now, return information about the started task
    return {
        "vacancy_id": vacancy_id,
        "task_id": task.id,
        "message": f"Analysis started for {len(resume_ids)} candidates",
        "resume_ids": resume_ids,
    }


@router.get("/report/{resume_id}")
async def get_interview_report(
    resume_id: int, resume_repo: ResumeRepository = Depends(ResumeRepository)
):
    """
    Get a finished interview analysis report.

    If the report is not ready yet, the analysis is started.
    """
    resume = await resume_repo.get_by_id(resume_id)
    if not resume:
        raise HTTPException(status_code=404, detail="Resume not found")

    # Check whether a finished report already exists in notes
    if resume.notes and "ОЦЕНКА КАНДИДАТА" in resume.notes:
        return {
@@ -168,39 +165,45 @@ async def get_interview_report(
            "candidate_name": resume.applicant_name,
            "status": "completed",
            "report_summary": resume.notes,
            "message": "Report available",
        }

    # No report yet, start the analysis
    task = generate_interview_report.delay(resume_id)

    return {
        "resume_id": resume_id,
        "candidate_name": resume.applicant_name,
        "status": "in_progress",
        "task_id": task.id,
        "message": "Analysis started, check back later",
    }


@router.get("/statistics/{vacancy_id}")
async def get_analysis_statistics(
    vacancy_id: int, resume_repo: ResumeRepository = Depends(ResumeRepository)
):
    """
    Get candidate analysis statistics for a vacancy.
    """
    resumes = await resume_repo.get_by_vacancy_id(vacancy_id)

    total_candidates = len(resumes)
    interviewed = len([r for r in resumes if r.status == "interviewed"])
    with_reports = len(
        [r for r in resumes if r.notes and "ОЦЕНКА КАНДИДАТА" in r.notes]
    )

    # Count recommendations from notes (simplified)
    recommendations = {
        "strongly_recommend": 0,
        "recommend": 0,
        "consider": 0,
        "reject": 0,
    }
    for resume in resumes:
        if resume.notes and "ОЦЕНКА КАНДИДАТА" in resume.notes:
            notes = resume.notes.lower()
@@ -212,7 +215,7 @@ async def get_analysis_statistics(
            recommendations["consider"] += 1
        elif "reject" in notes:
            recommendations["reject"] += 1

    return {
        "vacancy_id": vacancy_id,
        "statistics": {
@@ -220,6 +223,8 @@ async def get_analysis_statistics(
            "interviewed_candidates": interviewed,
            "analyzed_candidates": with_reports,
            "recommendations": recommendations,
            "analysis_completion": round((with_reports / max(interviewed, 1)) * 100, 1)
            if interviewed > 0
            else 0,
        },
    }
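The statistics endpoint above counts recommendation keywords found in `resume.notes` and derives a completion percentage. A stdlib sketch of that aggregation over plain dicts — field names follow the router, the sample data is invented, and the keyword-matching loop is a simplification since part of the original `elif` chain is elided by the hunk:

```python
def analysis_statistics(resumes: list[dict]) -> dict:
    """Aggregate interview-analysis stats roughly the way the
    /analysis/statistics endpoint does; data here is invented."""
    marker = "ОЦЕНКА КАНДИДАТА"  # report marker stored in resume notes
    interviewed = [r for r in resumes if r["status"] == "interviewed"]
    with_reports = [r for r in resumes if r.get("notes") and marker in r["notes"]]

    recommendations = {"strongly_recommend": 0, "recommend": 0, "consider": 0, "reject": 0}
    for r in with_reports:
        notes = r["notes"].lower()
        # First match wins; "strongly_recommend" must be checked before
        # "recommend", which it contains as a substring
        for key in ("strongly_recommend", "recommend", "consider", "reject"):
            if key in notes:
                recommendations[key] += 1
                break

    completion = (
        round(len(with_reports) / max(len(interviewed), 1) * 100, 1)
        if interviewed
        else 0
    )
    return {
        "total_candidates": len(resumes),
        "interviewed_candidates": len(interviewed),
        "analyzed_candidates": len(with_reports),
        "recommendations": recommendations,
        "analysis_completion": completion,
    }


stats = analysis_statistics([
    {"status": "interviewed", "notes": "ОЦЕНКА КАНДИДАТА: recommend"},
    {"status": "interviewed", "notes": None},
    {"status": "pending", "notes": None},
])
```

With one analyzed candidate out of two interviewed, `analysis_completion` comes out as 50.0.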
View File

@@ -1,34 +1,40 @@
from fastapi import APIRouter, Depends, HTTPException, Request

from app.core.session_middleware import get_current_session
from app.models.interview import InterviewValidationResponse, LiveKitTokenResponse
from app.models.session import Session
from app.services.interview_service import InterviewRoomService

router = APIRouter(prefix="/interview", tags=["interview"])


@router.get(
    "/{resume_id}/validate-interview", response_model=InterviewValidationResponse
)
async def validate_interview(
    request: Request,
    resume_id: int,
    current_session: Session = Depends(get_current_session),
    interview_service: InterviewRoomService = Depends(InterviewRoomService),
):
    """Validate a resume for interviewing"""
    if not current_session:
        raise HTTPException(status_code=401, detail="No active session")

    # Check whether the resume is valid for an interview
    validation_result = await interview_service.validate_resume_for_interview(resume_id)

    # If the resume is not found, return 404
    if "not found" in validation_result.message.lower():
        raise HTTPException(status_code=404, detail=validation_result.message)

    # If the resume is not ready, return 400
    if (
        not validation_result.can_interview
        and "not ready" in validation_result.message.lower()
    ):
        raise HTTPException(status_code=400, detail=validation_result.message)

    return validation_result

@@ -37,21 +43,21 @@ async def get_interview_token(
    request: Request,
    resume_id: int,
    current_session: Session = Depends(get_current_session),
    interview_service: InterviewRoomService = Depends(InterviewRoomService),
):
    """Get a token for a LiveKit interview"""
    if not current_session:
        raise HTTPException(status_code=401, detail="No active session")

    # Get a LiveKit token
    token_response = await interview_service.get_livekit_token(resume_id)

    if not token_response:
        raise HTTPException(
            status_code=400,
            detail="Cannot create interview session. Check if resume is ready for interview.",
        )

    return token_response

@@ -60,25 +66,24 @@ async def end_interview(
    request: Request,
    resume_id: int,
    current_session: Session = Depends(get_current_session),
    interview_service: InterviewRoomService = Depends(InterviewRoomService),
):
    """Finish an interview"""
    if not current_session:
        raise HTTPException(status_code=401, detail="No active session")

    # Get the active interview session
    interview_session = await interview_service.get_interview_session(resume_id)

    if not interview_session:
        raise HTTPException(status_code=404, detail="No active interview session found")

    # Complete the session
    success = await interview_service.update_session_status(
        interview_session.id, "completed"
    )

    if not success:
        raise HTTPException(status_code=500, detail="Failed to end interview session")

    return {"message": "Interview session ended successfully"}
View File

@@ -1,12 +1,21 @@
from fastapi import APIRouter, Depends, HTTPException, Query, UploadFile, File, Form, Request from fastapi import (
from typing import List, Optional APIRouter,
Depends,
File,
Form,
HTTPException,
Query,
Request,
UploadFile,
)
from app.core.session_middleware import get_current_session from app.core.session_middleware import get_current_session
from app.models.resume import ResumeCreate, ResumeUpdate, ResumeRead, ResumeStatus from app.models.resume import ResumeCreate, ResumeRead, ResumeStatus, ResumeUpdate
from app.models.session import Session from app.models.session import Session
from app.services.resume_service import ResumeService
from app.services.file_service import FileService from app.services.file_service import FileService
from celery_worker.tasks import parse_resume_task from app.services.resume_service import ResumeService
from celery_worker.celery_app import celery_app from celery_worker.celery_app import celery_app
from celery_worker.tasks import parse_resume_task
router = APIRouter(prefix="/resumes", tags=["resumes"]) router = APIRouter(prefix="/resumes", tags=["resumes"])
@ -17,71 +26,77 @@ async def create_resume(
vacancy_id: int = Form(...), vacancy_id: int = Form(...),
applicant_name: str = Form(...), applicant_name: str = Form(...),
applicant_email: str = Form(...), applicant_email: str = Form(...),
applicant_phone: Optional[str] = Form(None), applicant_phone: str | None = Form(None),
cover_letter: Optional[str] = Form(None), cover_letter: str | None = Form(None),
resume_file: UploadFile = File(...), resume_file: UploadFile = File(...),
current_session: Session = Depends(get_current_session), current_session: Session = Depends(get_current_session),
resume_service: ResumeService = Depends(ResumeService) resume_service: ResumeService = Depends(ResumeService),
): ):
if not current_session: if not current_session:
raise HTTPException(status_code=401, detail="No active session") raise HTTPException(status_code=401, detail="No active session")
file_service = FileService() file_service = FileService()
upload_result = await file_service.upload_resume_file(resume_file) upload_result = await file_service.upload_resume_file(resume_file)
if not upload_result: if not upload_result:
raise HTTPException(status_code=400, detail="Failed to upload resume file") raise HTTPException(status_code=400, detail="Failed to upload resume file")
resume_file_url, local_file_path = upload_result resume_file_url, local_file_path = upload_result
resume_data = ResumeCreate( resume_data = ResumeCreate(
vacancy_id=vacancy_id, vacancy_id=vacancy_id,
applicant_name=applicant_name, applicant_name=applicant_name,
applicant_email=applicant_email, applicant_email=applicant_email,
applicant_phone=applicant_phone, applicant_phone=applicant_phone,
resume_file_url=resume_file_url, resume_file_url=resume_file_url,
cover_letter=cover_letter cover_letter=cover_letter,
) )
# Создаем резюме в БД # Создаем резюме в БД
created_resume = await resume_service.create_resume_with_session(resume_data, current_session.id) created_resume = await resume_service.create_resume_with_session(
resume_data, current_session.id
)
# Запускаем асинхронную задачу парсинга резюме # Запускаем асинхронную задачу парсинга резюме
try: try:
# Запускаем Celery task для парсинга с локальным файлом # Запускаем Celery task для парсинга с локальным файлом
task_result = parse_resume_task.delay(str(created_resume.id), local_file_path) task_result = parse_resume_task.delay(str(created_resume.id), local_file_path)
# Добавляем task_id в ответ для отслеживания статуса # Добавляем task_id в ответ для отслеживания статуса
response_data = created_resume.model_dump() response_data = created_resume.model_dump()
response_data["parsing_task_id"] = task_result.id response_data["parsing_task_id"] = task_result.id
response_data["parsing_status"] = "started" response_data["parsing_status"] = "started"
return response_data return response_data
except Exception as e: except Exception as e:
# Если не удалось запустить парсинг, оставляем резюме в статусе PENDING # Если не удалось запустить парсинг, оставляем резюме в статусе PENDING
print(f"Failed to start parsing task for resume {created_resume.id}: {str(e)}") print(f"Failed to start parsing task for resume {created_resume.id}: {str(e)}")
return created_resume return created_resume
@router.get("/", response_model=List[ResumeRead]) @router.get("/", response_model=list[ResumeRead])
async def get_resumes( async def get_resumes(
request: Request, request: Request,
skip: int = Query(0, ge=0), skip: int = Query(0, ge=0),
limit: int = Query(100, ge=1, le=1000), limit: int = Query(100, ge=1, le=1000),
vacancy_id: Optional[int] = Query(None), vacancy_id: int | None = Query(None),
status: Optional[ResumeStatus] = Query(None), status: ResumeStatus | None = Query(None),
current_session: Session = Depends(get_current_session), current_session: Session = Depends(get_current_session),
service: ResumeService = Depends(ResumeService) service: ResumeService = Depends(ResumeService),
): ):
if not current_session: if not current_session:
raise HTTPException(status_code=401, detail="No active session") raise HTTPException(status_code=401, detail="No active session")
# Получаем только резюме текущего пользователя # Получаем только резюме текущего пользователя
if vacancy_id: if vacancy_id:
return await service.get_resumes_by_vacancy_and_session(vacancy_id, current_session.id) return await service.get_resumes_by_vacancy_and_session(
vacancy_id, current_session.id
return await service.get_resumes_by_session(current_session.id, skip=skip, limit=limit) )
return await service.get_resumes_by_session(
current_session.id, skip=skip, limit=limit
)
@router.get("/{resume_id}", response_model=ResumeRead) @router.get("/{resume_id}", response_model=ResumeRead)
@@ -89,19 +104,19 @@ async def get_resume(
    request: Request,
    resume_id: int,
    current_session: Session = Depends(get_current_session),
    service: ResumeService = Depends(ResumeService),
):
    if not current_session:
        raise HTTPException(status_code=401, detail="No active session")

    resume = await service.get_resume(resume_id)
    if not resume:
        raise HTTPException(status_code=404, detail="Resume not found")

    # Verify the resume belongs to the current session
    if resume.session_id != current_session.id:
        raise HTTPException(status_code=403, detail="Access denied")

    return resume
@@ -111,19 +126,19 @@ async def update_resume(
    resume_id: int,
    resume: ResumeUpdate,
    current_session: Session = Depends(get_current_session),
    service: ResumeService = Depends(ResumeService),
):
    if not current_session:
        raise HTTPException(status_code=401, detail="No active session")

    existing_resume = await service.get_resume(resume_id)
    if not existing_resume:
        raise HTTPException(status_code=404, detail="Resume not found")

    # Verify the resume belongs to the current session
    if existing_resume.session_id != current_session.id:
        raise HTTPException(status_code=403, detail="Access denied")

    updated_resume = await service.update_resume(resume_id, resume)
    return updated_resume
@@ -134,19 +149,19 @@ async def update_resume_status(
    resume_id: int,
    status: ResumeStatus,
    current_session: Session = Depends(get_current_session),
    service: ResumeService = Depends(ResumeService),
):
    if not current_session:
        raise HTTPException(status_code=401, detail="No active session")

    existing_resume = await service.get_resume(resume_id)
    if not existing_resume:
        raise HTTPException(status_code=404, detail="Resume not found")

    # Verify the resume belongs to the current session
    if existing_resume.session_id != current_session.id:
        raise HTTPException(status_code=403, detail="Access denied")

    updated_resume = await service.update_resume_status(resume_id, status)
    return updated_resume
@@ -157,28 +172,31 @@ async def upload_interview_report(
    resume_id: int,
    report_file: UploadFile = File(...),
    current_session: Session = Depends(get_current_session),
    resume_service: ResumeService = Depends(ResumeService),
):
    if not current_session:
        raise HTTPException(status_code=401, detail="No active session")

    file_service = FileService()

    existing_resume = await resume_service.get_resume(resume_id)
    if not existing_resume:
        raise HTTPException(status_code=404, detail="Resume not found")

    # Verify the resume belongs to the current session
    if existing_resume.session_id != current_session.id:
        raise HTTPException(status_code=403, detail="Access denied")

    report_url = await file_service.upload_interview_report(report_file)
    if not report_url:
        raise HTTPException(status_code=400, detail="Failed to upload interview report")

    updated_resume = await resume_service.add_interview_report(resume_id, report_url)

    return {
        "message": "Interview report uploaded successfully",
        "report_url": report_url,
    }


@router.get("/{resume_id}/parsing-status")
@@ -187,62 +205,65 @@ async def get_parsing_status(
    resume_id: int,
    task_id: str = Query(..., description="Task ID from resume upload response"),
    current_session: Session = Depends(get_current_session),
    service: ResumeService = Depends(ResumeService),
):
    """Get the resume parsing status by task_id"""
    if not current_session:
        raise HTTPException(status_code=401, detail="No active session")

    # Check access to the resume
    resume = await service.get_resume(resume_id)
    if not resume:
        raise HTTPException(status_code=404, detail="Resume not found")

    if resume.session_id != current_session.id:
        raise HTTPException(status_code=403, detail="Access denied")

    # Fetch the task status from Celery
    try:
        task_result = celery_app.AsyncResult(task_id)

        response = {
            "task_id": task_id,
            "task_state": task_result.state,
            "resume_status": resume.status,
        }

        if task_result.state == "PENDING":
            response.update({"status": "В очереди на обработку", "progress": 0})
        elif task_result.state == "PROGRESS":
            response.update(
                {
                    "status": task_result.info.get("status", "Обрабатывается"),
                    "progress": task_result.info.get("progress", 0),
                }
            )
        elif task_result.state == "SUCCESS":
            response.update(
                {
                    "status": "Завершено успешно",
                    "progress": 100,
                    "result": task_result.info,
                }
            )
        elif task_result.state == "FAILURE":
            response.update(
                {
                    "status": f"Ошибка: {str(task_result.info)}",
                    "progress": 0,
                    "error": str(task_result.info),
                }
            )

        return response

    except Exception as e:
        return {
            "task_id": task_id,
            "task_state": "UNKNOWN",
            "resume_status": resume.status,
            "error": f"Failed to get task status: {str(e)}",
        }
@@ -251,18 +272,18 @@ async def delete_resume(
    request: Request,
    resume_id: int,
    current_session: Session = Depends(get_current_session),
    service: ResumeService = Depends(ResumeService),
):
    if not current_session:
        raise HTTPException(status_code=401, detail="No active session")

    existing_resume = await service.get_resume(resume_id)
    if not existing_resume:
        raise HTTPException(status_code=404, detail="Resume not found")

    # Verify the resume belongs to the current session
    if existing_resume.session_id != current_session.id:
        raise HTTPException(status_code=403, detail="Access denied")

    success = await service.delete_resume(resume_id)
    return {"message": "Resume deleted successfully"}
View File
@@ -1,11 +1,11 @@
import logging

from fastapi import APIRouter, Depends, HTTPException, Request
from fastapi.responses import JSONResponse

from app.core.session_middleware import get_current_session
from app.models.session import Session, SessionRead
from app.repositories.session_repository import SessionRepository

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/sessions", tags=["Sessions"])
@@ -13,13 +13,12 @@ router = APIRouter(prefix="/sessions", tags=["Sessions"])
@router.get("/current", response_model=SessionRead)
async def get_current_session_info(
    request: Request, current_session: Session = Depends(get_current_session)
):
    """Return information about the current session"""
    if not current_session:
        raise HTTPException(status_code=401, detail="No active session")

    return SessionRead(
        id=current_session.id,
        session_id=current_session.session_id,
@@ -29,7 +28,7 @@ async def get_current_session_info(
        expires_at=current_session.expires_at,
        last_activity=current_session.last_activity,
        created_at=current_session.created_at,
        updated_at=current_session.updated_at,
    )
@@ -37,22 +36,22 @@ async def get_current_session_info(
async def refresh_session(
    request: Request,
    current_session: Session = Depends(get_current_session),
    session_repo: SessionRepository = Depends(SessionRepository),
):
    """Extend the session by 30 days"""
    if not current_session:
        raise HTTPException(status_code=401, detail="No active session")

    current_session.extend_session(days=30)

    # Update via the repository
    await session_repo.update_last_activity(current_session.session_id)

    logger.info(f"Extended session {current_session.session_id}")

    return {
        "message": "Session extended successfully",
        "expires_at": current_session.expires_at,
        "session_id": current_session.session_id,
    }
@@ -60,13 +59,13 @@ async def refresh_session(
async def logout(
    request: Request,
    current_session: Session = Depends(get_current_session),
    session_repo: SessionRepository = Depends(SessionRepository),
):
    """Terminate the current session"""
    if not current_session:
        raise HTTPException(status_code=401, detail="No active session")

    deactivated = await session_repo.deactivate_session(current_session.session_id)

    if deactivated:
        logger.info(f"Deactivated session {current_session.session_id}")

    response = JSONResponse(content={"message": "Logged out successfully"})
@@ -82,5 +81,5 @@ async def session_health_check():
    return {
        "status": "healthy",
        "service": "session_management",
        "message": "Session management is working properly",
    }
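`extend_session(days=30)` in `refresh_session` is defined on the `Session` model elsewhere in the repo. Assuming it simply pushes `expires_at` forward from the current time, the logic might look like this sketch (`SessionSketch` is a hypothetical stand-in, not the real model in `app/models/session.py`):

```python
from datetime import datetime, timedelta


class SessionSketch:
    """Hypothetical sketch of the session-extension logic."""

    def __init__(self, expires_at: datetime):
        self.expires_at = expires_at

    def extend_session(self, days: int = 30) -> None:
        # Push expiry `days` ahead of the current time,
        # regardless of the previous expiry.
        self.expires_at = datetime.utcnow() + timedelta(days=days)
```

Under this assumption, calling the refresh endpoint repeatedly keeps a sliding 30-day window rather than accumulating time.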
View File
@@ -1,29 +1,27 @@
from fastapi import APIRouter, Depends, HTTPException, Query

from app.models.vacancy import VacancyCreate, VacancyRead, VacancyUpdate
from app.services.vacancy_service import VacancyService

router = APIRouter(prefix="/vacancies", tags=["vacancies"])


@router.post("/", response_model=VacancyRead)
async def create_vacancy(
    vacancy: VacancyCreate, vacancy_service: VacancyService = Depends(VacancyService)
):
    return await vacancy_service.create_vacancy(vacancy)


@router.get("/", response_model=list[VacancyRead])
async def get_vacancies(
    skip: int = Query(0, ge=0),
    limit: int = Query(100, ge=1, le=1000),
    active_only: bool = Query(False),
    title: str | None = Query(None),
    company_name: str | None = Query(None),
    area_name: str | None = Query(None),
    vacancy_service: VacancyService = Depends(VacancyService),
):
    if any([title, company_name, area_name]):
        return await vacancy_service.search_vacancies(
@@ -31,19 +29,18 @@ async def get_vacancies(
            company_name=company_name,
            area_name=area_name,
            skip=skip,
            limit=limit,
        )

    if active_only:
        return await vacancy_service.get_active_vacancies(skip=skip, limit=limit)

    return await vacancy_service.get_all_vacancies(skip=skip, limit=limit)


@router.get("/{vacancy_id}", response_model=VacancyRead)
async def get_vacancy(
    vacancy_id: int, vacancy_service: VacancyService = Depends(VacancyService)
):
    vacancy = await vacancy_service.get_vacancy(vacancy_id)
    if not vacancy:
@@ -55,7 +52,7 @@ async def get_vacancy(
async def update_vacancy(
    vacancy_id: int,
    vacancy: VacancyUpdate,
    vacancy_service: VacancyService = Depends(VacancyService),
):
    updated_vacancy = await vacancy_service.update_vacancy(vacancy_id, vacancy)
    if not updated_vacancy:
@@ -65,8 +62,7 @@ async def update_vacancy(
@router.delete("/{vacancy_id}")
async def delete_vacancy(
    vacancy_id: int, vacancy_service: VacancyService = Depends(VacancyService)
):
    success = await vacancy_service.delete_vacancy(vacancy_id)
    if not success:
@@ -76,10 +72,9 @@ async def delete_vacancy(
@router.patch("/{vacancy_id}/archive", response_model=VacancyRead)
async def archive_vacancy(
    vacancy_id: int, vacancy_service: VacancyService = Depends(VacancyService)
):
    archived_vacancy = await vacancy_service.archive_vacancy(vacancy_id)
    if not archived_vacancy:
        raise HTTPException(status_code=404, detail="Vacancy not found")
    return archived_vacancy
View File
@@ -1,5 +1,5 @@
from .file_service import FileService
from .resume_service import ResumeService
from .vacancy_service import VacancyService

__all__ = ["VacancyService", "ResumeService", "FileService"]
View File
@@ -1,10 +1,11 @@
from typing import Annotated

from fastapi import Depends

from app.repositories.interview_repository import InterviewRepository
from app.repositories.resume_repository import ResumeRepository
from app.services.interview_finalization_service import InterviewFinalizationService
from app.services.interview_service import InterviewRoomService


class AdminService:
@@ -12,8 +13,12 @@ class AdminService:
        self,
        interview_repo: Annotated[InterviewRepository, Depends(InterviewRepository)],
        resume_repo: Annotated[ResumeRepository, Depends(ResumeRepository)],
        interview_service: Annotated[
            InterviewRoomService, Depends(InterviewRoomService)
        ],
        finalization_service: Annotated[
            InterviewFinalizationService, Depends(InterviewFinalizationService)
        ],
    ):
        self.interview_repo = interview_repo
        self.resume_repo = resume_repo
@@ -23,10 +28,11 @@ class AdminService:
    async def get_active_interview_processes(self):
        """Return the list of active AI interview processes"""
        active_sessions = await self.interview_service.get_active_agent_processes()

        import psutil

        processes_info = []
        for session in active_sessions:
            process_info = {
                "session_id": session.id,
@@ -34,66 +40,81 @@ class AdminService:
                "room_name": session.room_name,
                "pid": session.ai_agent_pid,
                "status": session.ai_agent_status,
                "started_at": session.started_at.isoformat()
                if session.started_at
                else None,
                "is_running": False,
                "memory_mb": 0,
                "cpu_percent": 0,
            }

            if session.ai_agent_pid:
                try:
                    process = psutil.Process(session.ai_agent_pid)
                    if process.is_running():
                        process_info["is_running"] = True
                        process_info["memory_mb"] = round(
                            process.memory_info().rss / 1024 / 1024, 1
                        )
                        process_info["cpu_percent"] = round(
                            process.cpu_percent(interval=0.1), 1
                        )
                except (psutil.NoSuchProcess, psutil.AccessDenied):
                    pass

            processes_info.append(process_info)

        return {
            "active_processes": len([p for p in processes_info if p["is_running"]]),
            "total_sessions": len(processes_info),
            "processes": processes_info,
        }

    async def stop_interview_process(self, session_id: int):
        """Stop an AI interview process"""
        success = await self.interview_service.stop_agent_process(session_id)
        return {
            "success": success,
            "message": f"Process for session {session_id} {'stopped' if success else 'failed to stop'}",
        }

    async def cleanup_dead_processes(self):
        """Clean up records of dead processes"""
        cleaned_count = await self.finalization_service.cleanup_dead_processes()
        return {
            "cleaned_processes": cleaned_count,
            "message": f"Cleaned up {cleaned_count} dead processes",
        }

    async def get_analytics_dashboard(self) -> dict:
        """Main analytics dashboard"""
        all_resumes = await self.resume_repo.get_all()

        status_stats = {}
        for resume in all_resumes:
            status = (
                resume.status.value
                if hasattr(resume.status, "value")
                else str(resume.status)
            )
            status_stats[status] = status_stats.get(status, 0) + 1

        analyzed_count = 0
        recommendation_stats = {
            "strongly_recommend": 0,
            "recommend": 0,
            "consider": 0,
            "reject": 0,
        }

        for resume in all_resumes:
            if resume.notes and "ОЦЕНКА КАНДИДАТА" in resume.notes:
                analyzed_count += 1
                notes = resume.notes.lower()
                if "strongly_recommend" in notes:
                    recommendation_stats["strongly_recommend"] += 1
                elif "recommend" in notes and "strongly_recommend" not in notes:
@@ -102,156 +123,196 @@ class AdminService:
                    recommendation_stats["consider"] += 1
                elif "reject" in notes:
                    recommendation_stats["reject"] += 1

        recent_resumes = sorted(all_resumes, key=lambda x: x.updated_at, reverse=True)[
            :10
        ]
        recent_activity = []
        for resume in recent_resumes:
            activity_item = {
                "resume_id": resume.id,
                "candidate_name": resume.applicant_name,
                "status": resume.status.value
                if hasattr(resume.status, "value")
                else str(resume.status),
                "updated_at": resume.updated_at.isoformat()
                if resume.updated_at
                else None,
                "has_analysis": resume.notes and "ОЦЕНКА КАНДИДАТА" in resume.notes,
            }
            recent_activity.append(activity_item)

        return {
            "summary": {
                "total_candidates": len(all_resumes),
                "interviewed_candidates": status_stats.get("interviewed", 0),
                "analyzed_candidates": analyzed_count,
                "analysis_completion_rate": round(
                    (analyzed_count / max(len(all_resumes), 1)) * 100, 1
                ),
            },
            "status_distribution": status_stats,
            "recommendation_distribution": recommendation_stats,
            "recent_activity": recent_activity,
        }

    async def get_vacancy_analytics(self, vacancy_id: int) -> dict:
        """Candidate analytics for a specific vacancy"""
        vacancy_resumes = await self.resume_repo.get_by_vacancy_id(vacancy_id)

        if not vacancy_resumes:
            return {
                "vacancy_id": vacancy_id,
                "message": "No candidates found for this vacancy",
                "candidates": [],
            }

        candidates_info = []
        for resume in vacancy_resumes:
            overall_score = None
            recommendation = None

            if resume.notes and "ОЦЕНКА КАНДИДАТА" in resume.notes:
                notes = resume.notes
                if "Общий балл:" in notes:
                    try:
                        score_line = [
                            line for line in notes.split("\n") if "Общий балл:" in line
                        ][0]
                        overall_score = int(
                            score_line.split("Общий балл:")[1].split("/")[0].strip()
                        )
                    except:
                        pass

                if "Рекомендация:" in notes:
                    try:
                        rec_line = [
                            line
                            for line in notes.split("\n")
                            if "Рекомендация:" in line
                        ][0]
                        recommendation = rec_line.split("Рекомендация:")[1].strip()
                    except:
                        pass

            candidate_info = {
                "resume_id": resume.id,
                "candidate_name": resume.applicant_name,
                "email": resume.applicant_email,
                "status": resume.status.value
                if hasattr(resume.status, "value")
                else str(resume.status),
                "created_at": resume.created_at.isoformat()
                if resume.created_at
                else None,
                "updated_at": resume.updated_at.isoformat()
                if resume.updated_at
                else None,
                "has_analysis": resume.notes and "ОЦЕНКА КАНДИДАТА" in resume.notes,
                "overall_score": overall_score,
                "recommendation": recommendation,
                "has_parsed_data": bool(resume.parsed_data),
                "has_interview_plan": bool(resume.interview_plan),
            }
            candidates_info.append(candidate_info)

        candidates_info.sort(
            key=lambda x: (x["overall_score"] or 0, x["updated_at"] or ""), reverse=True
        )

        return {
            "vacancy_id": vacancy_id,
            "total_candidates": len(candidates_info),
            "candidates": candidates_info,
        }

    async def generate_reports_for_vacancy(self, vacancy_id: int) -> dict:
        """Kick off report generation for every candidate of a vacancy"""
        from celery_worker.interview_analysis_task import analyze_multiple_candidates

        vacancy_resumes = await self.resume_repo.get_by_vacancy_id(vacancy_id)
        interviewed_resumes = [
            r for r in vacancy_resumes if r.status in ["interviewed"]
        ]

        if not interviewed_resumes:
            return {
                "error": "No interviewed candidates found for this vacancy",
                "vacancy_id": vacancy_id,
            }

        resume_ids = [r.id for r in interviewed_resumes]
        task = analyze_multiple_candidates.delay(resume_ids)

        return {
            "vacancy_id": vacancy_id,
            "task_id": task.id,
            "message": f"Analysis started for {len(resume_ids)} candidates",
            "resume_ids": resume_ids,
        }

    async def get_system_stats(self) -> dict:
        """Overall system statistics"""
        import psutil

        try:
            cpu_percent = psutil.cpu_percent(interval=1)
            memory = psutil.virtual_memory()
            disk = psutil.disk_usage("/")

            python_processes = []
            for proc in psutil.process_iter(
                ["pid", "name", "memory_info", "cpu_percent", "cmdline"]
            ):
                try:
                    if proc.info["name"] and "python" in proc.info["name"].lower():
                        cmdline = (
                            " ".join(proc.info["cmdline"])
                            if proc.info["cmdline"]
                            else ""
                        )
                        if "ai_interviewer_agent" in cmdline:
                            python_processes.append(
                                {
                                    "pid": proc.info["pid"],
                                    "memory_mb": round(
                                        proc.info["memory_info"].rss / 1024 / 1024, 1
                                    ),
                                    "cpu_percent": proc.info["cpu_percent"] or 0,
                                    "cmdline": cmdline,
                                }
                            )
                except (
psutil.NoSuchProcess,
psutil.AccessDenied,
psutil.ZombieProcess,
):
pass pass
return { return {
"system": { "system": {
"cpu_percent": cpu_percent, "cpu_percent": cpu_percent,
"memory_percent": memory.percent, "memory_percent": memory.percent,
"memory_available_gb": round(memory.available / 1024 / 1024 / 1024, 1), "memory_available_gb": round(
memory.available / 1024 / 1024 / 1024, 1
),
"disk_percent": disk.percent, "disk_percent": disk.percent,
"disk_free_gb": round(disk.free / 1024 / 1024 / 1024, 1) "disk_free_gb": round(disk.free / 1024 / 1024 / 1024, 1),
}, },
"ai_agents": { "ai_agents": {
"count": len(python_processes), "count": len(python_processes),
"total_memory_mb": sum(p['memory_mb'] for p in python_processes), "total_memory_mb": sum(p["memory_mb"] for p in python_processes),
"processes": python_processes "processes": python_processes,
} },
} }
except Exception as e: except Exception as e:
return { return {"error": f"Error getting system stats: {str(e)}"}
"error": f"Error getting system stats: {str(e)}"
}
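The agent-process scan inside `get_system_stats` is easier to reason about as a pure function over `psutil`-style records. A minimal sketch under stated assumptions — the field names (`rss_bytes`) and helper name are invented for the example; the real code reads `proc.info` from `psutil.process_iter`:

```python
def summarize_agent_processes(proc_infos: list[dict]) -> dict:
    """Count python processes running the interviewer agent and sum their memory."""
    agents = []
    for info in proc_infos:
        name = (info.get("name") or "").lower()
        cmdline = " ".join(info.get("cmdline") or [])
        if "python" in name and "ai_interviewer_agent" in cmdline:
            agents.append(
                {
                    "pid": info["pid"],
                    # rss_bytes is a stand-in for psutil's memory_info().rss
                    "memory_mb": round(info["rss_bytes"] / 1024 / 1024, 1),
                }
            )
    return {
        "count": len(agents),
        "total_memory_mb": sum(a["memory_mb"] for a in agents),
    }


# A 100 MB agent process, an unrelated python process, and a non-python process
sample = [
    {"pid": 10, "name": "python3", "cmdline": ["python3", "ai_interviewer_agent.py"], "rss_bytes": 104857600},
    {"pid": 11, "name": "python3", "cmdline": ["python3", "web.py"], "rss_bytes": 52428800},
    {"pid": 12, "name": "nginx", "cmdline": None, "rss_bytes": 1048576},
]
print(summarize_agent_processes(sample))  # → {'count': 1, 'total_memory_mb': 100.0}
```

Keeping the filtering separate from the `psutil` iteration also makes the `NoSuchProcess`/`AccessDenied` handling in the service the only place that touches live process state.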
@ -0,0 +1,297 @@
import asyncio
import json
import logging
import os
import subprocess
from dataclasses import dataclass
from datetime import UTC, datetime

import psutil

from app.core.config import settings

logger = logging.getLogger(__name__)


@dataclass
class AgentProcess:
    pid: int
    session_id: int | None
    room_name: str | None
    started_at: datetime
    status: str  # "idle", "active", "stopping"


class AgentManager:
    """Singleton manager for the AI interviewer agent."""

    _instance: "AgentManager | None" = None
    _agent_process: AgentProcess | None = None
    _lock = asyncio.Lock()

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        if not hasattr(self, "_initialized"):
            self._initialized = True
            self.livekit_url = settings.livekit_url or "ws://localhost:7880"
            self.api_key = settings.livekit_api_key or "devkey"
            self.api_secret = (
                settings.livekit_api_secret or "devkey_secret_32chars_minimum_length"
            )

    async def start_agent(self) -> bool:
        """Start the AI agent in idle mode (not bound to a specific session)."""
        async with self._lock:
            if self._agent_process and self._is_process_alive(self._agent_process.pid):
                logger.info(f"Agent already running with PID {self._agent_process.pid}")
                return True
            try:
                # Run the agent in worker mode (it waits for room connections)
                agent_cmd = [
                    "uv",
                    "run",
                    "ai_interviewer_agent.py",
                    "start",
                    "--url",
                    self.livekit_url,
                    "--api-key",
                    self.api_key,
                    "--api-secret",
                    self.api_secret,
                ]
                # Prepare the environment
                env = os.environ.copy()
                env.update(
                    {
                        "OPENAI_API_KEY": settings.openai_api_key or "",
                        "DEEPGRAM_API_KEY": settings.deepgram_api_key or "",
                        "CARTESIA_API_KEY": settings.cartesia_api_key or "",
                        "PYTHONIOENCODING": "utf-8",
                    }
                )
                # Launch the process
                with open("ai_agent.log", "w") as log_file:
                    process = subprocess.Popen(
                        agent_cmd,
                        env=env,
                        stdout=log_file,
                        stderr=subprocess.STDOUT,
                        cwd=".",
                    )
                self._agent_process = AgentProcess(
                    pid=process.pid,
                    session_id=None,
                    room_name=None,
                    started_at=datetime.now(UTC),
                    status="idle",
                )
                logger.info(f"AI Agent started with PID {process.pid}")
                return True
            except Exception as e:
                logger.error(f"Failed to start AI agent: {e}")
                return False

    async def stop_agent(self) -> bool:
        """Stop the AI agent."""
        async with self._lock:
            if not self._agent_process:
                return True
            try:
                if self._is_process_alive(self._agent_process.pid):
                    process = psutil.Process(self._agent_process.pid)
                    # Try a graceful shutdown first
                    process.terminate()
                    # Wait up to 10 seconds
                    for _ in range(100):
                        if not process.is_running():
                            break
                        await asyncio.sleep(0.1)
                    # Kill forcefully if it did not exit
                    if process.is_running():
                        process.kill()
                logger.info(f"AI Agent with PID {self._agent_process.pid} stopped")
                self._agent_process = None
                return True
            except Exception as e:
                logger.error(f"Error stopping AI agent: {e}")
                self._agent_process = None
                return False

    async def assign_session(
        self, session_id: int, room_name: str, interview_plan: dict
    ) -> bool:
        """Assign a specific interview session to the agent."""
        async with self._lock:
            if not self._agent_process or not self._is_process_alive(
                self._agent_process.pid
            ):
                logger.error("No active agent to assign session to")
                return False
            if self._agent_process.status == "active":
                logger.error(
                    f"Agent is busy with session {self._agent_process.session_id}"
                )
                return False
            try:
                # Create a metadata file for the session
                metadata_file = f"session_metadata_{session_id}.json"
                with open(metadata_file, "w", encoding="utf-8") as f:
                    json.dump(
                        {
                            "session_id": session_id,
                            "room_name": room_name,
                            "interview_plan": interview_plan,
                            "command": "start_interview",
                        },
                        f,
                        ensure_ascii=False,
                        indent=2,
                    )
                # Signal the agent through the command file
                command_file = "agent_commands.json"
                with open(command_file, "w", encoding="utf-8") as f:
                    json.dump(
                        {
                            "action": "start_session",
                            "session_id": session_id,
                            "room_name": room_name,
                            "metadata_file": metadata_file,
                            "timestamp": datetime.now(UTC).isoformat(),
                        },
                        f,
                        ensure_ascii=False,
                        indent=2,
                    )
                # Update the agent state
                self._agent_process.session_id = session_id
                self._agent_process.room_name = room_name
                self._agent_process.status = "active"
                logger.info(
                    f"Assigned session {session_id} to agent PID {self._agent_process.pid}"
                )
                return True
            except Exception as e:
                logger.error(f"Error assigning session to agent: {e}")
                return False

    async def release_session(self) -> bool:
        """Release the agent from its current session."""
        async with self._lock:
            if not self._agent_process:
                return True
            try:
                # Send the end-session command
                command_file = "agent_commands.json"
                with open(command_file, "w", encoding="utf-8") as f:
                    json.dump(
                        {
                            "action": "end_session",
                            "session_id": self._agent_process.session_id,
                            "timestamp": datetime.now(UTC).isoformat(),
                        },
                        f,
                        ensure_ascii=False,
                        indent=2,
                    )
                # Clean up metadata files
                if self._agent_process.session_id:
                    try:
                        os.remove(
                            f"session_metadata_{self._agent_process.session_id}.json"
                        )
                    except FileNotFoundError:
                        pass
                # Return the agent to idle mode
                self._agent_process.session_id = None
                self._agent_process.room_name = None
                self._agent_process.status = "idle"
                logger.info("Released agent from current session")
                return True
            except Exception as e:
                logger.error(f"Error releasing agent session: {e}")
                return False

    def get_status(self) -> dict:
        """Return the current agent status."""
        if not self._agent_process:
            return {
                "status": "stopped",
                "pid": None,
                "session_id": None,
                "room_name": None,
                "uptime": None,
            }
        is_alive = self._is_process_alive(self._agent_process.pid)
        if not is_alive:
            self._agent_process = None
            return {
                "status": "dead",
                "pid": None,
                "session_id": None,
                "room_name": None,
                "uptime": None,
            }
        uptime = datetime.now(UTC) - self._agent_process.started_at
        return {
            "status": self._agent_process.status,
            "pid": self._agent_process.pid,
            "session_id": self._agent_process.session_id,
            "room_name": self._agent_process.room_name,
            "uptime": str(uptime),
            "started_at": self._agent_process.started_at.isoformat(),
        }

    def is_available(self) -> bool:
        """Check whether the agent is available for a new session."""
        if not self._agent_process:
            return False
        if not self._is_process_alive(self._agent_process.pid):
            self._agent_process = None
            return False
        return self._agent_process.status == "idle"

    def _is_process_alive(self, pid: int) -> bool:
        """Check whether the process is alive."""
        try:
            process = psutil.Process(pid)
            return process.is_running()
        except psutil.NoSuchProcess:
            return False
        except Exception:
            return False


# Global manager instance
agent_manager = AgentManager()
@ -1,68 +1,71 @@
import asyncio
import json
import logging
from datetime import datetime

from livekit import rtc

from rag.settings import settings

logger = logging.getLogger(__name__)


class AIInterviewerService:
    """AI interviewer service that joins a LiveKit room as a participant."""

    def __init__(self, interview_session_id: int, resume_data: dict):
        self.interview_session_id = interview_session_id
        self.resume_data = resume_data
        self.room: rtc.Room | None = None
        self.audio_source: rtc.AudioSource | None = None
        self.conversation_history: list[dict] = []
        self.current_question_index = 0
        self.interview_questions = []

    async def connect_to_room(self, room_name: str, token: str):
        """Connect the AI agent to a LiveKit room."""
        try:
            self.room = rtc.Room()
            # Register event handlers
            self.room.on("participant_connected", self.on_participant_connected)
            self.room.on("track_subscribed", self.on_track_subscribed)
            self.room.on("data_received", self.on_data_received)
            # Connect to the room
            await self.room.connect(settings.livekit_url, token)
            logger.info(f"AI agent connected to room: {room_name}")
            # Create an audio source for TTS
            self.audio_source = rtc.AudioSource(sample_rate=16000, num_channels=1)
            track = rtc.LocalAudioTrack.create_audio_track(
                "ai_voice", self.audio_source
            )
            # Publish the audio track
            await self.room.local_participant.publish_track(
                track, rtc.TrackPublishOptions()
            )
            # Generate the first question
            await self.generate_interview_questions()
            await self.start_interview()
        except Exception as e:
            logger.error(f"Error connecting to room: {str(e)}")
            raise

    async def on_participant_connected(self, participant: rtc.RemoteParticipant):
        """Handle a user joining the room."""
        logger.info(f"Participant connected: {participant.identity}")
        # We can send a greeting message
        await self.send_message({"type": "ai_speaking_start"})

    async def on_track_subscribed(
        self,
        track: rtc.Track,
        publication: rtc.RemoteTrackPublication,
        participant: rtc.RemoteParticipant,
    ):
        """Handle receiving the user's audio track."""
        if track.kind == rtc.TrackKind.KIND_AUDIO:
@ -70,7 +73,7 @@ class AIInterviewerService:
            # Set up audio processing for STT
            audio_stream = rtc.AudioStream(track)
            asyncio.create_task(self.process_user_audio(audio_stream))

    async def on_data_received(self, data: bytes, participant: rtc.RemoteParticipant):
        """Handle messages from the frontend."""
        try:
@ -78,11 +81,11 @@ class AIInterviewerService:
            await self.handle_frontend_message(message)
        except Exception as e:
            logger.error(f"Error processing data message: {str(e)}")

    async def handle_frontend_message(self, message: dict):
        """Handle messages from the frontend."""
        msg_type = message.get("type")
        if msg_type == "start_interview":
            await self.start_interview()
        elif msg_type == "end_interview":
@ -90,7 +93,7 @@ class AIInterviewerService:
        elif msg_type == "user_finished_speaking":
            # The user finished speaking; we can process the answer
            pass

    async def process_user_audio(self, audio_stream: rtc.AudioStream):
        """Process user audio through STT."""
        try:
@ -105,22 +108,22 @@ class AIInterviewerService:
        except Exception as e:
            logger.error(f"Error processing user audio: {str(e)}")

    async def generate_interview_questions(self):
        """Generate interview questions from the resume."""
        try:
            from rag.registry import registry

            chat_model = registry.get_chat_model()
            # Reuse the existing question-generation logic
            questions_prompt = f"""
            Сгенерируй 8 вопросов для голосового собеседования кандидата.

            РЕЗЮМЕ КАНДИДАТА:
            Имя: {self.resume_data.get("name", "Не указано")}
            Навыки: {", ".join(self.resume_data.get("skills", []))}
            Опыт работы: {self.resume_data.get("total_years", 0)} лет
            Образование: {self.resume_data.get("education", "Не указано")}

            ВАЖНО:
            1. Вопросы должны быть короткими и ясными для голосового формата
@ -131,18 +134,21 @@ class AIInterviewerService:
            Верни только JSON массив строк с вопросами:
            ["Привет! Расскажи немного о себе", "Какой у тебя опыт в...", ...]
            """
            from langchain.schema import HumanMessage, SystemMessage

            messages = [
                SystemMessage(
                    content="Ты HR интервьюер. Говори естественно и дружелюбно."
                ),
                HumanMessage(content=questions_prompt),
            ]
            response = chat_model.get_llm().invoke(messages)
            response_text = response.content.strip()
            # Parse the JSON response
            if response_text.startswith("[") and response_text.endswith("]"):
                self.interview_questions = json.loads(response_text)
            else:
                # Fallback questions
@ -152,94 +158,102 @@ class AIInterviewerService:
                    "Расскажи о своем самом значимом проекте",
                    "Какие технологии ты используешь в работе?",
                    "Как ты решаешь сложные задачи?",
                    "Есть ли у тебя вопросы ко мне?",
                ]
            logger.info(
                f"Generated {len(self.interview_questions)} interview questions"
            )
        except Exception as e:
            logger.error(f"Error generating questions: {str(e)}")
            # Fall back to basic questions
            self.interview_questions = [
                "Привет! Расскажи о своем опыте",
                "Что тебя интересует в этой позиции?",
                "Есть ли у тебя вопросы?",
            ]

    async def start_interview(self):
        """Start the interview."""
        if not self.interview_questions:
            await self.generate_interview_questions()
        # Send the first question
        await self.ask_next_question()

    async def ask_next_question(self):
        """Ask the next question."""
        if self.current_question_index >= len(self.interview_questions):
            await self.end_interview()
            return

        question = self.interview_questions[self.current_question_index]
        # Send the message to the frontend
        await self.send_message(
            {
                "type": "question",
                "text": question,
                "questionNumber": self.current_question_index + 1,
            }
        )
        # Convert to speech and play it back
        # TODO: Implement TTS
        # audio_data = await self.text_to_speech(question)
        # await self.play_audio(audio_data)
        self.current_question_index += 1
        logger.info(f"Asked question {self.current_question_index}: {question}")

    async def process_user_response(self, user_text: str):
        """Process the user's answer."""
        # Save the answer to the history
        self.conversation_history.append(
            {
                "type": "user_response",
                "text": user_text,
                "timestamp": datetime.utcnow().isoformat(),
                "question_index": self.current_question_index - 1,
            }
        )
        # We could analyze the answer with an LLM here
        # and decide whether to ask a follow-up or move on.
        # For now, simply move to the next question.
        await asyncio.sleep(1)  # Short pause
        await self.ask_next_question()

    async def send_message(self, message: dict):
        """Send a message to the frontend."""
        if self.room:
            data = json.dumps(message).encode()
            await self.room.local_participant.publish_data(data)

    async def play_audio(self, audio_data: bytes):
        """Play audio through LiveKit."""
        if self.audio_source:
            # TODO: Convert audio_data to the required format and send it
            pass

    async def end_interview(self):
        """Finish the interview."""
        await self.send_message(
            {
                "type": "interview_complete",
                "summary": f"Interview completed with {len(self.conversation_history)} responses",
            }
        )
        # Save the transcript to the database
        transcript = json.dumps(self.conversation_history, ensure_ascii=False, indent=2)
        # TODO: Update interview_session in the DB with the transcript
        logger.info("Interview completed")
        # Disconnect from the room
        if self.room:
            await self.room.disconnect()
@ -247,31 +261,32 @@ class AIInterviewerService:

class AIInterviewerManager:
    """Manager for AI interviewers."""

    def __init__(self):
        self.active_sessions: dict[int, AIInterviewerService] = {}

    async def start_interview_session(
        self, interview_session_id: int, room_name: str, resume_data: dict
    ):
        """Start an AI interviewer for a session."""
        try:
            # Create a token for the AI agent
            # A dedicated token for the AI agent is needed
            ai_interviewer = AIInterviewerService(interview_session_id, resume_data)
            # TODO: Generate a token for the AI agent
            # ai_token = generate_ai_agent_token(room_name)
            # await ai_interviewer.connect_to_room(room_name, ai_token)
            self.active_sessions[interview_session_id] = ai_interviewer
            logger.info(f"Started AI interviewer for session: {interview_session_id}")
        except Exception as e:
            logger.error(f"Error starting AI interviewer: {str(e)}")
            raise

    async def stop_interview_session(self, interview_session_id: int):
        """Stop an AI interviewer."""
        if interview_session_id in self.active_sessions:
@ -282,4 +297,4 @@ class AIInterviewerManager:

# Global manager
ai_interviewer_manager = AIInterviewerManager()
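The brittle step in `generate_interview_questions` is parsing the LLM reply, which is trusted to be a bare JSON array before falling back to canned questions. That parse can be factored into a pure helper and hardened slightly — a sketch with a hypothetical function name; the backtick-stripping branch is an addition for models that wrap JSON in a code fence, not part of the service:

```python
import json


def parse_question_list(response_text: str, fallback: list[str]) -> list[str]:
    """Parse an LLM reply expected to be a JSON array of strings, else fall back."""
    text = response_text.strip()
    if text.startswith("`"):
        # Models sometimes wrap the JSON in a fenced code block
        text = text.strip("`").removeprefix("json").strip()
    if text.startswith("[") and text.endswith("]"):
        try:
            parsed = json.loads(text)
            if isinstance(parsed, list) and all(isinstance(q, str) for q in parsed):
                return parsed
        except json.JSONDecodeError:
            pass
    return fallback
```

A pure helper like this is also trivially unit-testable, unlike the current in-method parse that requires a live chat model.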
@ -1,7 +1,8 @@
import os
import tempfile

from fastapi import UploadFile

from app.core.s3 import s3_service
@ -9,29 +10,27 @@ class FileService:
    def __init__(self):
        self.s3_service = s3_service

    async def upload_resume_file(self, file: UploadFile) -> tuple[str, str] | None:
        """
        Upload a resume to S3 and keep a local copy for parsing.

        Returns:
            tuple[str, str]: (s3_url, local_file_path), or None on failure
        """
        if not file.filename:
            return None

        content = await file.read()
        content_type = file.content_type or "application/octet-stream"
        # Upload to S3
        s3_url = await self.s3_service.upload_file(
            file_content=content, file_name=file.filename, content_type=content_type
        )
        if not s3_url:
            return None

        # Save a local copy for parsing
        try:
            # Create a temporary file, preserving the extension
@ -39,43 +38,45 @@ class FileService:
            file_extension = os.path.splitext(file.filename)[1]
            if not file_extension:
                # Try to infer the extension from the MIME type
                if content_type == "application/pdf":
                    file_extension = ".pdf"
                elif content_type in [
                    "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
                ]:
                    file_extension = ".docx"
                elif content_type in ["application/msword"]:
                    file_extension = ".doc"
                elif content_type == "text/plain":
                    file_extension = ".txt"
                else:
                    file_extension = ".pdf"  # fallback

            temp_filename = f"resume_{hash(s3_url)}_{file.filename}"
            local_file_path = os.path.join(temp_dir, temp_filename)
            # Write the file contents
            with open(local_file_path, "wb") as temp_file:
                temp_file.write(content)

            return (s3_url, local_file_path)
        except Exception as e:
            print(f"Failed to save local copy: {str(e)}")
            # If the local save failed, return only the S3 URL
            return (s3_url, s3_url)

    async def upload_interview_report(self, file: UploadFile) -> str | None:
        if not file.filename:
            return None

        content = await file.read()
        content_type = file.content_type or "application/octet-stream"

        return await self.s3_service.upload_file(
            file_content=content,
            file_name=f"interview_report_{file.filename}",
            content_type=content_type,
        )

    async def delete_file(self, file_url: str) -> bool:
        return await self.s3_service.delete_file(file_url)
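The MIME-to-extension chain in `upload_resume_file` maps cleanly onto a dict lookup. A standalone sketch mirroring those branches (the helper name is hypothetical; the mapping and the `.pdf` fallback match the service):

```python
import os

# MIME types handled by the service, mirroring the if/elif chain
MIME_EXTENSIONS = {
    "application/pdf": ".pdf",
    "application/vnd.openxmlformats-officedocument.wordprocessingml.document": ".docx",
    "application/msword": ".doc",
    "text/plain": ".txt",
}


def infer_extension(filename: str, content_type: str) -> str:
    """Prefer the filename's extension; fall back to the MIME type, then .pdf."""
    ext = os.path.splitext(filename)[1]
    if ext:
        return ext
    return MIME_EXTENSIONS.get(content_type, ".pdf")


print(infer_extension("cv", "application/msword"))  # → .doc
```

A table keeps the extension policy in one place, so adding, say, `.rtf` support becomes a one-line change instead of another `elif`.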
@ -1,111 +1,132 @@
import logging
from datetime import datetime
from typing import Annotated

from fastapi import Depends

from app.models.resume import ResumeStatus
from app.repositories.interview_repository import InterviewRepository
from app.repositories.resume_repository import ResumeRepository

logger = logging.getLogger("interview-finalization")


class InterviewFinalizationService:
    """Service for finalizing an interview and starting its analysis"""

    def __init__(
        self,
        interview_repo: Annotated[InterviewRepository, Depends(InterviewRepository)],
        resume_repo: Annotated[ResumeRepository, Depends(ResumeRepository)],
    ):
        self.interview_repo = interview_repo
        self.resume_repo = resume_repo

    async def finalize_interview(
        self, room_name: str, dialogue_history: list, interview_metrics: dict = None
    ) -> dict | None:
        """
        Finalizes the interview and starts the analysis

        Args:
            room_name: LiveKit room name
            dialogue_history: Dialogue history
            interview_metrics: Interview metrics (question count, duration, etc.)

        Returns:
            dict with details of the finalized interview, or None on error
        """
        try:
            logger.info(f"[FINALIZE] Starting finalization for room: {room_name}")

            # 1. Find the interview session
            interview_session = await self.interview_repo.get_by_room_name(room_name)
            if not interview_session:
                logger.error(
                    f"[FINALIZE] Interview session not found for room: {room_name}"
                )
                return None

            # 2. Mark the interview session as "completed"
            success = await self.interview_repo.update_status(
                interview_session.id, "completed", datetime.utcnow()
            )
            if not success:
                logger.error(
                    f"[FINALIZE] Failed to update session status for {interview_session.id}"
                )
                return None

            resume_id = interview_session.resume_id
            logger.info(
                f"[FINALIZE] Interview session {interview_session.id} marked as completed"
            )

            # 3. Update the resume status to "INTERVIEWED"
            resume = await self.resume_repo.get(resume_id)
            if resume:
                await self.resume_repo.update(
                    resume_id,
                    {
                        "status": ResumeStatus.INTERVIEWED,
                        "updated_at": datetime.utcnow(),
                    },
                )
                logger.info(
                    f"[FINALIZE] Resume {resume_id} status updated to INTERVIEWED"
                )
            else:
                logger.warning(f"[FINALIZE] Resume {resume_id} not found")

            # 4. Save the final dialogue history
            await self.interview_repo.update_dialogue_history(
                room_name, dialogue_history
            )
            logger.info(
                f"[FINALIZE] Saved final dialogue ({len(dialogue_history)} messages)"
            )

            # 5. Update the AI agent status
            await self.interview_repo.update_ai_agent_status(
                interview_session.id, None, "stopped"
            )

            # 6. Start interview analysis via Celery
            analysis_task = await self._start_interview_analysis(resume_id)

            # 7. Assemble the final metrics
            finalization_result = {
                "session_id": interview_session.id,
                "resume_id": resume_id,
                "room_name": room_name,
                "total_messages": len(dialogue_history),
                "analysis_task_id": analysis_task.get("task_id")
                if analysis_task
                else None,
                "completed_at": datetime.utcnow().isoformat(),
                "metrics": interview_metrics or {},
            }

            logger.info(
                f"[FINALIZE] Interview successfully finalized: {finalization_result}"
            )
            return finalization_result

        except Exception as e:
            logger.error(
                f"[FINALIZE] Error finalizing interview for room {room_name}: {str(e)}"
            )
            return None

    async def _start_interview_analysis(self, resume_id: int):
        """Starts interview analysis via Celery"""
        # try:
        logger.info(
            f"[FINALIZE] Attempting to start analysis task for resume_id: {resume_id}"
        )
        # Import the task
        # from celery_worker.interview_analysis_task import generate_interview_report
        # logger.debug(f"[FINALIZE] Successfully imported generate_interview_report task")
        #
@ -126,54 +147,70 @@ class InterviewFinalizationService:
        # except Exception as e:
        #     logger.error(f"[FINALIZE] Failed to start analysis task for resume {resume_id}: {str(e)}")
        #     logger.debug(f"[FINALIZE] Exception type: {type(e).__name__}")

        # Fallback: try to start the analysis via the HTTP API for any other errors
        return await self._start_analysis_via_http(resume_id)

    async def _start_analysis_via_http(self, resume_id: int):
        """Fallback: start analysis via the HTTP API (when Celery is unreachable from the AI agent)"""
        try:
            import httpx

            url = f"http://localhost:8000/api/v1/analysis/interview-report/{resume_id}"
            logger.info(f"[FINALIZE] Attempting HTTP fallback to URL: {url}")

            # Try sending an HTTP request to the local API to start the analysis
            async with httpx.AsyncClient() as client:
                response = await client.post(url, timeout=5.0)
                if response.status_code == 200:
                    result = response.json()
                    logger.info(
                        f"[FINALIZE] Analysis started via HTTP API for resume_id: {resume_id}, task_id: {result.get('task_id', 'unknown')}"
                    )
                    return result
                else:
                    logger.error(
                        f"[FINALIZE] HTTP API returned {response.status_code} for resume_id: {resume_id}"
                    )
                    logger.debug(f"[FINALIZE] Response body: {response.text[:200]}")
                    return None

        except Exception as e:
            logger.error(
                f"[FINALIZE] HTTP fallback failed for resume {resume_id}: {str(e)}"
            )
            return None

    async def save_dialogue_to_session(
        self, room_name: str, dialogue_history: list
    ) -> bool:
        """Saves the dialogue to the session (for intermediate saves)"""
        try:
            success = await self.interview_repo.update_dialogue_history(
                room_name, dialogue_history
            )
            if success:
                logger.info(
                    f"[DIALOGUE] Saved {len(dialogue_history)} messages for room: {room_name}"
                )
            return success
        except Exception as e:
            logger.error(
                f"[DIALOGUE] Error saving dialogue for room {room_name}: {str(e)}"
            )
            return False

    async def cleanup_dead_processes(self) -> int:
        """Cleans up information about dead AI processes"""
        try:
            import psutil

            active_sessions = (
                await self.interview_repo.get_sessions_with_running_agents()
            )
            cleaned_count = 0

            for session in active_sessions:
                if session.ai_agent_pid:
                    try:
@ -188,10 +225,10 @@ class InterviewFinalizationService:
                            session.id, None, "stopped"
                        )
                        cleaned_count += 1

            logger.info(f"[CLEANUP] Cleaned up {cleaned_count} dead processes")
            return cleaned_count

        except Exception as e:
            logger.error(f"[CLEANUP] Error cleaning up processes: {str(e)}")
            return 0
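For reference, the payload returned by `finalize_interview` (step 7 above) can be sketched as a standalone helper. The helper name `build_finalization_result` and the `datetime.now(timezone.utc)` call are illustrative, not part of the service (which uses `datetime.utcnow()`):

```python
from datetime import datetime, timezone


def build_finalization_result(session_id, resume_id, room_name,
                              dialogue_history, analysis_task=None, metrics=None):
    # Illustrative mirror of the summary dict assembled at the end of
    # finalize_interview; this helper does not exist in the service itself.
    return {
        "session_id": session_id,
        "resume_id": resume_id,
        "room_name": room_name,
        "total_messages": len(dialogue_history),
        "analysis_task_id": analysis_task.get("task_id") if analysis_task else None,
        "completed_at": datetime.now(timezone.utc).isoformat(),
        "metrics": metrics or {},
    }


result = build_finalization_result(
    7, 42, "interview_42_1700000000_ab12cd34",
    [{"role": "assistant", "text": "Привет"}],
    analysis_task={"task_id": "abc123"},
)
print(result["total_messages"], result["analysis_task_id"])
```

Note that `analysis_task_id` degrades to `None` when the Celery dispatch (and its HTTP fallback) fails, so callers must treat it as optional.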

View File

@ -1,21 +1,19 @@
import time
import uuid
from typing import Annotated

from fastapi import Depends
from livekit.api import AccessToken, VideoGrants

from app.models.interview import (
    InterviewSession,
    InterviewValidationResponse,
    LiveKitTokenResponse,
)
from app.models.resume import ResumeStatus
from app.repositories.interview_repository import InterviewRepository
from app.repositories.resume_repository import ResumeRepository
from app.services.agent_manager import agent_manager
from rag.settings import settings
@ -23,224 +21,189 @@ class InterviewRoomService:
    def __init__(
        self,
        interview_repo: Annotated[InterviewRepository, Depends(InterviewRepository)],
        resume_repo: Annotated[ResumeRepository, Depends(ResumeRepository)],
    ):
        self.interview_repo = interview_repo
        self.resume_repo = resume_repo
        self.livekit_url = settings.livekit_url or "ws://localhost:7880"
        self.api_key = settings.livekit_api_key or "devkey"
        self.api_secret = settings.livekit_api_secret or "secret"

    async def validate_resume_for_interview(
        self, resume_id: int
    ) -> InterviewValidationResponse:
        """Checks whether an interview can be held for this resume"""
        try:
            # Fetch the resume
            resume = await self.resume_repo.get(resume_id)
            if not resume:
                return InterviewValidationResponse(
                    can_interview=False, message="Resume not found"
                )

            # Check the resume status
            if resume.status != ResumeStatus.PARSED:
                return InterviewValidationResponse(
                    can_interview=False,
                    message=f"Resume is not ready for interview. Current status: {resume.status}",
                )

            # Check for an active session only for information (do not block)
            active_session = await self.interview_repo.get_active_session_by_resume_id(
                resume_id
            )
            message = "Resume is ready for interview"
            if active_session:
                message = "Resume has an active interview session"

            return InterviewValidationResponse(can_interview=True, message=message)

        except Exception as e:
            return InterviewValidationResponse(
                can_interview=False, message=f"Error validating resume: {str(e)}"
            )

    async def create_interview_session(self, resume_id: int) -> InterviewSession | None:
        """Creates a new interview session"""
        try:
            # Generate a unique room name with a UUID suffix
            unique_id = str(uuid.uuid4())[:8]
            timestamp = int(time.time())
            room_name = f"interview_{resume_id}_{timestamp}_{unique_id}"

            # Create the session in the DB via the repository
            interview_session = await self.interview_repo.create_interview_session(
                resume_id, room_name
            )
            return interview_session

        except Exception as e:
            print(f"Error creating interview session: {str(e)}")
            return None

    def generate_access_token(self, room_name: str, participant_name: str) -> str:
        """Generates a JWT token for LiveKit"""
        try:
            at = AccessToken(self.api_key, self.api_secret)

            # Fix the use of grants
            grants = VideoGrants(
                room_join=True, room=room_name, can_publish=True, can_subscribe=True
            )
            at.with_grants(grants).with_identity(participant_name)

            return at.to_jwt()
        except Exception as e:
            print(f"Error generating LiveKit token: {str(e)}")
            raise

    async def get_livekit_token(self, resume_id: int) -> LiveKitTokenResponse | None:
        """Creates an interview session and returns a LiveKit token"""
        try:
            # Check agent availability
            if not agent_manager.is_available():
                print("[ERROR] AI Agent is not available for new interview")
                return None

            # Validate the resume
            validation = await self.validate_resume_for_interview(resume_id)
            if not validation.can_interview:
                return None

            # Check whether a session already exists for this resume
            existing_session = (
                await self.interview_repo.get_active_session_by_resume_id(resume_id)
            )
            if existing_session:
                # Reuse the existing session
                interview_session = existing_session
                print(
                    f"[DEBUG] Using existing interview session: {interview_session.id}"
                )
            else:
                # Create a new interview session
                interview_session = await self.create_interview_session(resume_id)
                if not interview_session:
                    return None
                print(f"[DEBUG] Created new interview session: {interview_session.id}")

            # Generate the token
            participant_name = f"user_{resume_id}"
            token = self.generate_access_token(
                interview_session.room_name, participant_name
            )

            # Fetch the prepared interview plan for the AI agent
            interview_plan = await self.get_resume_data_for_interview(resume_id)

            # Update the session status to ACTIVE
            await self.interview_repo.update_session_status(
                interview_session.id, "active"
            )

            # Assign the session to the agent via the manager
            success = await agent_manager.assign_session(
                interview_session.id, interview_session.room_name, interview_plan
            )
            if not success:
                print("[ERROR] Failed to assign session to AI agent")
                return None

            return LiveKitTokenResponse(
                token=token,
                room_name=interview_session.room_name,
                server_url=self.livekit_url,
            )

        except Exception as e:
            print(f"Error getting LiveKit token: {str(e)}")
            return None

    async def update_session_status(self, session_id: int, status: str) -> bool:
        """Updates the interview session status"""
        return await self.interview_repo.update_session_status(session_id, status)

    async def get_interview_session(self, resume_id: int) -> InterviewSession | None:
        """Gets the active interview session for a resume"""
        return await self.interview_repo.get_active_session_by_resume_id(resume_id)

    async def end_interview_session(self, session_id: int) -> bool:
        """Ends the interview session and releases the agent"""
        try:
            # Release the agent from the current session
            await agent_manager.release_session()

            # Update the session status
            await self.interview_repo.update_session_status(session_id, "completed")

            print(f"[DEBUG] Interview session {session_id} ended successfully")
            return True

        except Exception as e:
            print(f"Error ending interview session {session_id}: {str(e)}")
            return False

    def get_agent_status(self) -> dict:
        """Gets the current AI agent status"""
        return agent_manager.get_status()

    async def get_resume_data_for_interview(self, resume_id: int) -> dict:
        """Gets the prepared interview plan from the database"""
        try:
            # Fetch the resume with its prepared interview plan
            resume = await self.resume_repo.get(resume_id)
            if not resume:
                return self._get_fallback_interview_plan()

            # If a prepared interview plan exists, use it
            if resume.interview_plan:
                return resume.interview_plan

            # If there is no plan, build a basic one from the available data
            fallback_plan = {
                "interview_structure": {
@ -250,42 +213,50 @@ class InterviewRoomService:
                    {
                        "name": "Знакомство",
                        "duration_minutes": 5,
                        "questions": [
                            "Расскажи немного о себе",
                            "Что тебя привлекло в этой позиции?",
                        ],
                    },
                    {
                        "name": "Опыт работы",
                        "duration_minutes": 15,
                        "questions": [
                            "Расскажи о своем опыте",
                            "Какие технологии используешь?",
                        ],
                    },
                    {
                        "name": "Вопросы кандидата",
                        "duration_minutes": 10,
                        "questions": ["Есть ли у тебя вопросы ко мне?"],
                    },
                ],
            },
            "focus_areas": ["experience", "technical_skills"],
            "candidate_info": {
                "name": resume.applicant_name,
                "email": resume.applicant_email,
                "phone": resume.applicant_phone,
            },
        }

        # Add parsed data if available
        if resume.parsed_data:
            fallback_plan["candidate_info"].update(
                {
                    "skills": resume.parsed_data.get("skills", []),
                    "total_years": resume.parsed_data.get("total_years", 0),
                    "education": resume.parsed_data.get("education", ""),
                }
            )

        return fallback_plan

        except Exception as e:
            print(f"Error getting interview plan: {str(e)}")
            return self._get_fallback_interview_plan()

    def _get_fallback_interview_plan(self) -> dict:
        """Fallback interview plan when loading from the DB fails"""
        return {
@ -296,98 +267,114 @@ class InterviewRoomService:
                    {
                        "name": "Знакомство",
                        "duration_minutes": 10,
                        "questions": [
                            "Расскажи о себе",
                            "Что тебя привлекло в этой позиции?",
                        ],
                    },
                    {
                        "name": "Опыт работы",
                        "duration_minutes": 15,
                        "questions": [
                            "Расскажи о своем опыте",
                            "Какие технологии используешь?",
                        ],
                    },
                    {
                        "name": "Вопросы кандидата",
                        "duration_minutes": 5,
                        "questions": ["Есть ли у тебя вопросы?"],
                    },
                ],
            },
            "focus_areas": ["experience", "technical_skills"],
            "candidate_info": {"name": "Кандидат", "skills": [], "total_years": 0},
        }

    async def update_agent_process_info(
        self, session_id: int, pid: int = None, status: str = "not_started"
    ) -> bool:
        """Updates AI agent process information"""
        return await self.interview_repo.update_ai_agent_status(session_id, pid, status)

    async def get_active_agent_processes(self) -> list:
        """Gets the list of active AI processes"""
        return await self.interview_repo.get_sessions_with_running_agents()

    async def stop_agent_process(self, session_id: int) -> bool:
        """Stops the AI process for a session"""
        try:
            session = await self.interview_repo.get(session_id)
            if not session or not session.ai_agent_pid:
                return False

            import psutil

            try:
                # Try to stop the process gracefully
                process = psutil.Process(session.ai_agent_pid)
                process.terminate()

                # Wait up to 5 seconds for it to exit
                import time

                for _ in range(50):
                    if not process.is_running():
                        break
                    time.sleep(0.1)

                # If it has not exited, kill it forcefully
                if process.is_running():
                    process.kill()

                # Update the status in the DB
                await self.interview_repo.update_ai_agent_status(
                    session_id, None, "stopped"
                )
                print(
                    f"Stopped AI agent process {session.ai_agent_pid} for session {session_id}"
                )
                return True

            except (psutil.NoSuchProcess, psutil.AccessDenied):
                # The process no longer exists
                await self.interview_repo.update_ai_agent_status(
                    session_id, None, "stopped"
                )
                return True

        except Exception as e:
            print(f"Error stopping agent process: {str(e)}")
            return False

    async def cleanup_dead_processes(self) -> int:
        """Cleans up information about dead processes"""
        try:
            import psutil

            active_sessions = await self.get_active_agent_processes()
            cleaned_count = 0

            for session in active_sessions:
                if session.ai_agent_pid:
                    try:
                        process = psutil.Process(session.ai_agent_pid)
                        if not process.is_running():
                            await self.interview_repo.update_ai_agent_status(
                                session.id, None, "stopped"
                            )
                            cleaned_count += 1
                    except psutil.NoSuchProcess:
                        await self.interview_repo.update_ai_agent_status(
                            session.id, None, "stopped"
                        )
                        cleaned_count += 1

            print(f"Cleaned up {cleaned_count} dead processes")
            return cleaned_count

        except Exception as e:
            print(f"Error cleaning up processes: {str(e)}")
            return 0

View File

@ -1,43 +1,55 @@
from typing import Annotated

from fastapi import Depends

from app.models.resume import Resume, ResumeCreate, ResumeStatus, ResumeUpdate
from app.repositories.resume_repository import ResumeRepository


class ResumeService:
    def __init__(
        self, repository: Annotated[ResumeRepository, Depends(ResumeRepository)]
    ):
        self.repository = repository

    async def create_resume(self, resume_data: ResumeCreate) -> Resume:
        resume = Resume.model_validate(resume_data)
        return await self.repository.create(resume)

    async def create_resume_with_session(
        self, resume_data: ResumeCreate, session_id: int
    ) -> Resume:
        """Create a resume linked to a session."""
        resume_dict = resume_data.model_dump()
        return await self.repository.create_with_session(resume_dict, session_id)

    async def get_resume(self, resume_id: int) -> Resume | None:
        return await self.repository.get(resume_id)

    async def get_all_resumes(self, skip: int = 0, limit: int = 100) -> list[Resume]:
        return await self.repository.get_all(skip=skip, limit=limit)

    async def get_resumes_by_vacancy(self, vacancy_id: int) -> list[Resume]:
        return await self.repository.get_by_vacancy_id(vacancy_id)

    async def get_resumes_by_session(
        self, session_id: int, skip: int = 0, limit: int = 100
    ) -> list[Resume]:
        """Get a user's resumes by session_id."""
        return await self.repository.get_by_session_id(session_id)

    async def get_resumes_by_vacancy_and_session(
        self, vacancy_id: int, session_id: int
    ) -> list[Resume]:
        """Get a user's resumes for a specific vacancy."""
        return await self.repository.get_by_vacancy_and_session(vacancy_id, session_id)

    async def get_resumes_by_status(self, status: ResumeStatus) -> list[Resume]:
        return await self.repository.get_by_status(status)

    async def update_resume(
        self, resume_id: int, resume_data: ResumeUpdate
    ) -> Resume | None:
        update_data = resume_data.model_dump(exclude_unset=True)
        if not update_data:
            return await self.repository.get(resume_id)
@ -46,8 +58,12 @@ class ResumeService:
    async def delete_resume(self, resume_id: int) -> bool:
        return await self.repository.delete(resume_id)

    async def update_resume_status(
        self, resume_id: int, status: ResumeStatus
    ) -> Resume | None:
        return await self.repository.update_status(resume_id, status)

    async def add_interview_report(
        self, resume_id: int, report_url: str
    ) -> Resume | None:
        return await self.repository.add_interview_report(resume_id, report_url)

View File

@ -1,27 +1,35 @@
from typing import Annotated

from fastapi import Depends

from app.models.vacancy import Vacancy, VacancyCreate, VacancyUpdate
from app.repositories.vacancy_repository import VacancyRepository


class VacancyService:
    def __init__(
        self, repository: Annotated[VacancyRepository, Depends(VacancyRepository)]
    ):
        self.repository = repository

    async def create_vacancy(self, vacancy_data: VacancyCreate) -> Vacancy:
        vacancy = Vacancy.model_validate(vacancy_data)
        return await self.repository.create(vacancy)

    async def get_vacancy(self, vacancy_id: int) -> Vacancy | None:
        return await self.repository.get(vacancy_id)

    async def get_all_vacancies(self, skip: int = 0, limit: int = 100) -> list[Vacancy]:
        return await self.repository.get_all(skip=skip, limit=limit)

    async def get_active_vacancies(
        self, skip: int = 0, limit: int = 100
    ) -> list[Vacancy]:
        return await self.repository.get_active_vacancies(skip=skip, limit=limit)

    async def update_vacancy(
        self, vacancy_id: int, vacancy_data: VacancyUpdate
    ) -> Vacancy | None:
        update_data = vacancy_data.model_dump(exclude_unset=True)
        if not update_data:
            return await self.repository.get(vacancy_id)
@ -30,21 +38,21 @@ class VacancyService:
    async def delete_vacancy(self, vacancy_id: int) -> bool:
        return await self.repository.delete(vacancy_id)

    async def archive_vacancy(self, vacancy_id: int) -> Vacancy | None:
        return await self.repository.update(vacancy_id, {"is_archived": True})

    async def search_vacancies(
        self,
        title: str | None = None,
        company_name: str | None = None,
        area_name: str | None = None,
        skip: int = 0,
        limit: int = 100,
    ) -> list[Vacancy]:
        return await self.repository.search_vacancies(
            title=title,
            company_name=company_name,
            area_name=area_name,
            skip=skip,
            limit=limit,
        )
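`search_vacancies` forwards three optional filters plus pagination to the repository. The repository implementation is not shown in this diff; as an in-memory sketch of the semantics one would expect (`None` meaning "no filter", case-insensitive substring matching — assumptions, since the real query is presumably built in SQL):

```python
def search_vacancies(vacancies, title=None, company_name=None, area_name=None,
                     skip=0, limit=100):
    def matches(v):
        # A None filter matches everything; otherwise compare
        # case-insensitively as a substring.
        checks = [(title, v["title"]), (company_name, v["company_name"]),
                  (area_name, v["area_name"])]
        return all(f is None or f.lower() in value.lower() for f, value in checks)

    # Filter first, then apply skip/limit pagination to the result.
    return [v for v in vacancies if matches(v)][skip : skip + limit]


data = [
    {"title": "Python Developer", "company_name": "Acme", "area_name": "Moscow"},
    {"title": "Go Developer", "company_name": "Beta", "area_name": "Moscow"},
]
print([v["title"] for v in search_vacancies(data, title="python")])
# ['Python Developer']
```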

View File

@ -1,11 +1,12 @@
from celery import Celery

from rag.settings import settings

celery_app = Celery(
    "hr_ai_backend",
    broker=f"redis://{settings.redis_cache_url}:{settings.redis_cache_port}/{settings.redis_cache_db}",
    backend=f"redis://{settings.redis_cache_url}:{settings.redis_cache_port}/{settings.redis_cache_db}",
    include=["celery_worker.tasks"],
)

celery_app.conf.update(
@ -14,4 +15,4 @@ celery_app.conf.update(
    result_serializer="json",
    timezone="UTC",
    enable_utc=True,
)
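Both `broker` and `backend` point at the same Redis database, with the URL assembled from `rag.settings`. A small sketch of that URL construction with stand-in settings values (the real host, port, and db come from the project's environment, not these literals):

```python
class Settings:
    # Stand-in values; the real ones live in rag.settings
    redis_cache_url = "localhost"
    redis_cache_port = 6379
    redis_cache_db = 0


settings = Settings()
redis_url = (
    f"redis://{settings.redis_cache_url}:"
    f"{settings.redis_cache_port}/{settings.redis_cache_db}"
)
print(redis_url)  # redis://localhost:6379/0
```

Sharing one database for broker and result backend is fine at this scale; separating them (different `/N` suffixes) makes it easier to flush results without dropping queued tasks.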

View File

@ -1,23 +1,22 @@
from contextlib import contextmanager

from sqlalchemy import create_engine
from sqlalchemy.orm import Session, sessionmaker

from rag.settings import settings

# Create a synchronous engine for Celery (Celery workers run in separate processes)
sync_engine = create_engine(
    settings.database_url.replace(
        "asyncpg", "psycopg2"
    ),  # Swap asyncpg for psycopg2 to get a synchronous connection
    echo=False,
    future=True,
    connect_args={"client_encoding": "utf8"},  # Force UTF-8
)

# Create a synchronous session maker
SyncSessionLocal = sessionmaker(bind=sync_engine, autocommit=False, autoflush=False)


@contextmanager
@ -36,78 +35,89 @@ def get_sync_session() -> Session:
class SyncResumeRepository:
    """Synchronous repository for working with Resume in Celery tasks."""

    def __init__(self, session: Session):
        self.session = session

    def get_by_id(self, resume_id: int):
        """Get a resume by ID."""
        from app.models.resume import Resume

        return self.session.query(Resume).filter(Resume.id == resume_id).first()

    def update_status(
        self,
        resume_id: int,
        status: str,
        parsed_data: dict = None,
        error_message: str = None,
    ):
        """Update the resume status."""
        from datetime import datetime

        from app.models.resume import Resume, ResumeStatus

        resume = self.session.query(Resume).filter(Resume.id == resume_id).first()
        if resume:
            # Update the status
            if status == "parsing":
                resume.status = ResumeStatus.PARSING
            elif status == "parsed":
                resume.status = ResumeStatus.PARSED
                if parsed_data:
                    resume.parsed_data = parsed_data
            elif status == "failed":
                resume.status = ResumeStatus.PARSE_FAILED
                if error_message:
                    resume.parse_error = error_message

            resume.updated_at = datetime.utcnow()
            self.session.add(resume)
            return resume
        return None

    def update_interview_plan(self, resume_id: int, interview_plan: dict):
        """Update the interview plan."""
        from datetime import datetime

        from app.models.resume import Resume

        resume = self.session.query(Resume).filter(Resume.id == resume_id).first()
        if resume:
            resume.interview_plan = interview_plan
            resume.updated_at = datetime.utcnow()
            self.session.add(resume)
            return resume
        return None

    def _normalize_utf8_dict(self, data):
        """Recursively normalize UTF-8 in a dictionary."""
        import json

        # Serialize to JSON with ensure_ascii=False, then parse back.
        # This forces unicode escape sequences into normal characters.
        try:
            json_str = json.dumps(data, ensure_ascii=False, separators=(",", ":"))
            return json.loads(json_str)
        except (TypeError, ValueError):
            # Fallback: recursive processing
            if isinstance(data, dict):
                return {
                    key: self._normalize_utf8_dict(value) for key, value in data.items()
                }
            elif isinstance(data, list):
                return [self._normalize_utf8_dict(item) for item in data]
            elif isinstance(data, str):
                try:
                    # Try to decode unicode escape sequences
                    if "\\u" in data:
                        return data.encode().decode("unicode_escape")
                    return data
                except (UnicodeDecodeError, UnicodeEncodeError):
                    return data
            else:
                return data
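`_normalize_utf8_dict` has two paths: a JSON round-trip with `ensure_ascii=False` for serializable data, and a per-string `unicode_escape` fallback for literal `\uXXXX` sequences. A condensed standalone sketch of both paths (simplified from the method above; the recursion over nested containers is omitted):

```python
import json


def normalize_utf8(data):
    """JSON round-trip: keeps Cyrillic as real characters, not \\uXXXX escapes."""
    try:
        return json.loads(json.dumps(data, ensure_ascii=False, separators=(",", ":")))
    except (TypeError, ValueError):
        return data


# Serializable data survives the round-trip unchanged.
print(normalize_utf8({"city": "Москва"}))  # {'city': 'Москва'}

# The fallback branch decodes literal backslash-u sequences inside strings.
raw = "\\u041c\\u043e\\u0441\\u043a\\u0432\\u0430"
print(raw.encode().decode("unicode_escape"))  # Москва
```

Note that `unicode_escape` is only safe for ASCII input like these escape sequences; applying it to text that already contains non-ASCII characters can mangle it, which is why the method guards with a `"\\u" in data` check.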

View File

@ -1,12 +1,12 @@
import json
import logging
from datetime import datetime
from typing import Any

from celery import shared_task

from celery_worker.database import SyncResumeRepository, get_sync_session
from rag.settings import settings

logger = logging.getLogger(__name__)
@ -15,46 +15,52 @@ logger = logging.getLogger(__name__)
def generate_interview_report(resume_id: int):
    """
    Comprehensive candidate evaluation based on the resume, the vacancy, and the interview dialogue

    Args:
        resume_id: ID of the resume to analyze

    Returns:
        dict: Full report with scores and recommendations
    """
    logger.info(f"[INTERVIEW_ANALYSIS] Starting analysis for resume_id: {resume_id}")

    try:
        with get_sync_session() as db:
            repo = SyncResumeRepository(db)

            # Fetch the resume
            resume = repo.get_by_id(resume_id)
            if not resume:
                logger.error(f"[INTERVIEW_ANALYSIS] Resume {resume_id} not found")
                return {"error": "Resume not found"}

            # Fetch the vacancy (fall back to empty data if it is missing)
            vacancy = _get_vacancy_data(db, resume.vacancy_id)
            if not vacancy:
                logger.warning(
                    f"[INTERVIEW_ANALYSIS] Vacancy {resume.vacancy_id} not found, using empty vacancy data"
                )
                vacancy = {
                    "id": resume.vacancy_id,
                    "title": "Неизвестная позиция",
                    "description": "Описание недоступно",
                    "requirements": [],
                    "skills_required": [],
                    "experience_level": "middle",
                }

            # Fetch the interview history
            interview_session = _get_interview_session(db, resume_id)

            # Parse the JSON fields
            parsed_resume = _parse_json_field(resume.parsed_data)
            interview_plan = _parse_json_field(resume.interview_plan)
            dialogue_history = (
                _parse_json_field(interview_session.dialogue_history)
                if interview_session
                else []
            )

            # Generate the report
            report = _generate_comprehensive_report(
                resume_id=resume_id,
@ -62,24 +68,29 @@ def generate_interview_report(resume_id: int):
                vacancy=vacancy,
                parsed_resume=parsed_resume,
                interview_plan=interview_plan,
                dialogue_history=dialogue_history,
            )

            # Save the report to the DB
            _save_report_to_db(db, resume_id, report)

            logger.info(
                f"[INTERVIEW_ANALYSIS] Analysis completed for resume_id: {resume_id}, score: {report['overall_score']}"
            )
            return report

    except Exception as e:
        logger.error(
            f"[INTERVIEW_ANALYSIS] Error analyzing resume {resume_id}: {str(e)}"
        )
        return {"error": str(e)}


def _get_vacancy_data(db, vacancy_id: int) -> dict | None:
    """Get the vacancy data."""
    try:
        from app.models.vacancy import Vacancy

        vacancy = db.query(Vacancy).filter(Vacancy.id == vacancy_id).first()
        if vacancy:
            # Parse key_skills into a list if it is a string
@ -87,28 +98,36 @@ def _get_vacancy_data(db, vacancy_id: int) -> Optional[Dict]:
            if vacancy.key_skills:
                if isinstance(vacancy.key_skills, str):
                    # Split on commas and strip surrounding whitespace
                    key_skills = [
                        skill.strip()
                        for skill in vacancy.key_skills.split(",")
                        if skill.strip()
                    ]
                elif isinstance(vacancy.key_skills, list):
                    key_skills = vacancy.key_skills

            # Map the Experience enum onto an experience-level string
            experience_mapping = {
                "noExperience": "junior",
                "between1And3": "junior",
                "between3And6": "middle",
                "moreThan6": "senior",
            }
            experience_level = experience_mapping.get(vacancy.experience, "middle")

            return {
                "id": vacancy.id,
                "title": vacancy.title,
                "description": vacancy.description,
                "requirements": [vacancy.description]
                if vacancy.description
                else [],  # Use the description as the requirements
                "skills_required": key_skills,
                "experience_level": experience_level,
                "employment_type": vacancy.employment_type,
                "salary_range": f"{vacancy.salary_from or 0}-{vacancy.salary_to or 0}"
                if vacancy.salary_from or vacancy.salary_to
                else None,
            }
        return None
    except Exception as e:
@ -120,13 +139,18 @@ def _get_interview_session(db, resume_id: int):
    """Get the interview session."""
    try:
        from app.models.interview import InterviewSession

        return (
            db.query(InterviewSession)
            .filter(InterviewSession.resume_id == resume_id)
            .first()
        )
    except Exception as e:
        logger.error(f"Error getting interview session: {e}")
        return None


def _parse_json_field(field_data) -> dict:
    """Safely parse a JSON field."""
    if field_data is None:
        return {}
@ -143,138 +167,139 @@ def _parse_json_field(field_data) -> Dict:
def _generate_comprehensive_report(
    resume_id: int,
    candidate_name: str,
    vacancy: dict,
    parsed_resume: dict,
    interview_plan: dict,
    dialogue_history: list[dict],
) -> dict[str, Any]:
    """
    Generates a comprehensive candidate report using an LLM
    """
    # Prepare the analysis context
    context = _prepare_analysis_context(
        vacancy=vacancy,
        parsed_resume=parsed_resume,
        interview_plan=interview_plan,
        dialogue_history=dialogue_history,
    )

    # Generate the evaluation via OpenAI
    evaluation = _call_openai_for_evaluation(context)

    # Assemble the final report
    report = {
        "resume_id": resume_id,
        "candidate_name": candidate_name,
        "position": vacancy.get("title", "Unknown Position"),
        "interview_date": datetime.utcnow().isoformat(),
        "analysis_context": {
            "has_parsed_resume": bool(parsed_resume),
            "has_interview_plan": bool(interview_plan),
            "dialogue_messages_count": len(dialogue_history),
            "vacancy_requirements_count": len(vacancy.get("requirements", [])),
        },
    }

    # Merge in the evaluation results
    if evaluation:
        # Make sure overall_score is present
        if "overall_score" not in evaluation:
            evaluation["overall_score"] = _calculate_overall_score(evaluation)
        report.update(evaluation)
    else:
        # Fallback evaluation when the LLM call failed
        report.update(
            _generate_fallback_evaluation(parsed_resume, vacancy, dialogue_history)
        )

    return report


def _calculate_overall_score(evaluation: dict) -> int:
    """Computes the overall score as the arithmetic mean of all criteria."""
    try:
        scores = evaluation.get("scores", {})
        if not scores:
            return 50  # Default score

        total_score = 0
        count = 0

        for criterion_name, criterion_data in scores.items():
            if isinstance(criterion_data, dict) and "score" in criterion_data:
                total_score += criterion_data["score"]
                count += 1

        if count == 0:
            return 50  # Default if no valid scores

        overall = int(total_score / count)
        return max(0, min(100, overall))  # Ensure 0-100 range

    except Exception:
        return 50  # Safe fallback
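`_calculate_overall_score` clamps the mean of the per-criterion scores to 0-100 and falls back to 50 when nothing usable is present. Condensed into a standalone form for illustration (same semantics, a comprehension instead of the explicit loop):

```python
def overall_score(evaluation: dict) -> int:
    scores = evaluation.get("scores", {})
    # Keep only criteria shaped like {"score": <number>, ...}
    valid = [c["score"] for c in scores.values()
             if isinstance(c, dict) and "score" in c]
    if not valid:
        return 50  # neutral default when no criterion carries a score
    # Mean of the criteria, truncated to int and clamped to 0-100
    return max(0, min(100, int(sum(valid) / len(valid))))


example = {"scores": {"technical_skills": {"score": 80},
                      "communication": {"score": 65}}}
print(overall_score(example))  # 72
print(overall_score({}))       # 50
```

The `isinstance` guard matters: the LLM returns free-form JSON, so any malformed criterion is skipped rather than crashing the averaging.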
def _prepare_analysis_context(
    vacancy: dict,
    parsed_resume: dict,
    interview_plan: dict,
    dialogue_history: list[dict],
) -> str:
    """Prepares the context for LLM analysis."""
    # Assemble the interview dialogue
    dialogue_text = ""
    if dialogue_history:
        dialogue_messages = []
        for msg in dialogue_history[-20:]:  # Last 20 messages
            role = msg.get("role", "unknown")
            content = msg.get("content", "")
            dialogue_messages.append(f"{role.upper()}: {content}")
        dialogue_text = "\n".join(dialogue_messages)

    # Build the context
    context = f"""
АНАЛИЗ КАНДИДАТА НА СОБЕСЕДОВАНИЕ

ВАКАНСИЯ:
- Позиция: {vacancy.get("title", "Не указана")}
- Описание: {vacancy.get("description", "Не указано")[:500]}
- Требования: {", ".join(vacancy.get("requirements", []))}
- Требуемые навыки: {", ".join(vacancy.get("skills_required", []))}
- Уровень опыта: {vacancy.get("experience_level", "middle")}

РЕЗЮМЕ КАНДИДАТА:
- Имя: {parsed_resume.get("name", "Не указано")}
- Опыт работы: {parsed_resume.get("total_years", "Не указано")} лет
- Навыки: {", ".join(parsed_resume.get("skills", []))}
- Образование: {parsed_resume.get("education", "Не указано")}
- Предыдущие позиции: {"; ".join([pos.get("title", "") + " в " + pos.get("company", "") for pos in parsed_resume.get("work_experience", [])])}

ПЛАН ИНТЕРВЬЮ:
{json.dumps(interview_plan, ensure_ascii=False, indent=2) if interview_plan else "План интервью не найден"}

ДИАЛОГ ИНТЕРВЬЮ:
{dialogue_text if dialogue_text else "Диалог интервью не найден или пуст"}
"""
    return context


def _call_openai_for_evaluation(context: str) -> dict | None:
    """Calls OpenAI to generate the evaluation."""
    if not settings.openai_api_key:
        logger.warning("OpenAI API key not configured, skipping LLM evaluation")
        return None

    try:
        import openai

        openai.api_key = settings.openai_api_key

        evaluation_prompt = f"""
{context}
@ -313,35 +338,35 @@ def _call_openai_for_evaluation(context: str) -> Optional[Dict]:
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": evaluation_prompt}],
            response_format={"type": "json_object"},
            temperature=0.3,
        )

        evaluation = json.loads(response.choices[0].message.content)
        logger.info("[INTERVIEW_ANALYSIS] OpenAI evaluation completed")
        return evaluation

    except Exception as e:
        logger.error(f"[INTERVIEW_ANALYSIS] Error calling OpenAI: {str(e)}")
        return None
def _generate_fallback_evaluation(
    parsed_resume: dict, vacancy: dict, dialogue_history: list[dict]
) -> dict[str, Any]:
    """Generates a basic evaluation without an LLM."""
    # Simple heuristic evaluation
    technical_score = _calculate_technical_match(parsed_resume, vacancy)
    experience_score = _calculate_experience_score(parsed_resume, vacancy)
    communication_score = 70  # Average score when there is no dialogue

    if dialogue_history:
        communication_score = min(
            90, 50 + len(dialogue_history) * 2
        )  # More dialogue = better communication

    overall_score = (technical_score + experience_score + communication_score) // 3

    # Pick the recommendation
    if overall_score >= 90:
        recommendation = "strongly_recommend"
@ -351,84 +376,86 @@ def _generate_fallback_evaluation(
        recommendation = "consider"
    else:
        recommendation = "reject"

    return {
        "scores": {
            "technical_skills": {
                "score": technical_score,
                "justification": f"Соответствие по навыкам: {technical_score}%",
                "concerns": "Автоматическая оценка без анализа LLM",
            },
            "experience_relevance": {
                "score": experience_score,
                "justification": f"Опыт работы: {parsed_resume.get('total_years', 0)} лет",
                "concerns": "Требуется ручная проверка релевантности опыта",
            },
            "communication": {
                "score": communication_score,
                "justification": f"Активность в диалоге: {len(dialogue_history)} сообщений",
                "concerns": "Оценка основана на количестве сообщений",
            },
            "problem_solving": {
                "score": 60,
                "justification": "Средняя оценка (нет данных для анализа)",
                "concerns": "Требуется техническое интервью",
            },
            "cultural_fit": {
                "score": 65,
                "justification": "Средняя оценка (нет данных для анализа)",
                "concerns": "Требуется личная встреча с командой",
            },
        },
        "overall_score": overall_score,
        "recommendation": recommendation,
        "strengths": [
            f"Опыт работы: {parsed_resume.get('total_years', 0)} лет",
            f"Технические навыки: {len(parsed_resume.get('skills', []))} навыков",
            f"Участие в интервью: {len(dialogue_history)} сообщений",
        ],
        "weaknesses": [
            "Автоматическая оценка без LLM анализа",
            "Требуется дополнительное техническое интервью",
            "Нет глубокого анализа ответов на вопросы",
        ],
        "red_flags": [],
        "next_steps": "Рекомендуется провести техническое интервью с тимлидом для более точной оценки.",
"analysis_method": "fallback_heuristic" "analysis_method": "fallback_heuristic",
} }
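The fallback aggregation above can be checked in isolation: communication grows with dialogue length (capped at 90), and the overall score is the integer mean of the three criteria. A minimal sketch mirroring just those two formulas:

```python
def fallback_aggregate(
    technical: int, experience: int, dialogue_len: int
) -> tuple[int, int]:
    """Mirror of the fallback heuristic: returns (communication, overall)."""
    communication = 70  # default when there is no dialogue
    if dialogue_len:
        communication = min(90, 50 + dialogue_len * 2)  # more dialogue = better
    overall = (technical + experience + communication) // 3
    return communication, overall

print(fallback_aggregate(80, 70, 12))  # (74, 74)
```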
def _calculate_technical_match(parsed_resume: dict, vacancy: dict) -> int:
    """Computes how well the candidate's technical skills match the vacancy"""
    resume_skills = set([skill.lower() for skill in parsed_resume.get("skills", [])])
    required_skills = set(
        [skill.lower() for skill in vacancy.get("skills_required", [])]
    )

    if not required_skills:
        return 70  # If no requirements are specified

    matching_skills = resume_skills.intersection(required_skills)
    match_percentage = (len(matching_skills) / len(required_skills)) * 100

    return min(100, int(match_percentage))
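The skill-match heuristic above is a case-insensitive set intersection expressed as a percentage of the required skills, capped at 100. A standalone version for a quick check, without the resume/vacancy dict plumbing:

```python
def technical_match(resume_skills: list[str], required_skills: list[str]) -> int:
    # Same logic as _calculate_technical_match, minus the dict plumbing
    have = {s.lower() for s in resume_skills}
    need = {s.lower() for s in required_skills}
    if not need:
        return 70  # no stated requirements: neutral default
    return min(100, int(len(have & need) / len(need) * 100))

# 2 of 4 required skills matched, case-insensitively
print(technical_match(["Python", "FastAPI", "Git"],
                      ["python", "fastapi", "postgresql", "docker"]))  # 50
```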
def _calculate_experience_score(parsed_resume: dict, vacancy: dict) -> int:
    """Computes the experience relevance score"""
    years_experience = parsed_resume.get("total_years", 0)
    required_level = vacancy.get("experience_level", "middle")

    # Map experience levels to years of experience
    level_mapping = {
        "junior": (0, 2),
        "middle": (2, 5),
        "senior": (5, 10),
        "lead": (8, 15),
    }

    min_years, max_years = level_mapping.get(required_level, (2, 5))

    if years_experience < min_years:
        # Not enough experience
        return max(30, int(70 * (years_experience / min_years)))
@@ -440,194 +467,248 @@ def _calculate_experience_score(parsed_resume: Dict, vacancy: Dict) -> int:
    return 90


def _save_report_to_db(db, resume_id: int, report: dict):
    """Saves the report to the interview_reports table in the database"""
    try:
        from app.models.interview import InterviewSession
        from app.models.interview_report import InterviewReport

        # Find the interview session by resume_id
        interview_session = (
            db.query(InterviewSession)
            .filter(InterviewSession.resume_id == resume_id)
            .first()
        )

        if not interview_session:
            logger.warning(
                f"[INTERVIEW_ANALYSIS] No interview session found for resume_id: {resume_id}"
            )
            return

        # Check whether a report already exists for this session
        existing_report = (
            db.query(InterviewReport)
            .filter(InterviewReport.interview_session_id == interview_session.id)
            .first()
        )

        if existing_report:
            logger.info(
                f"[INTERVIEW_ANALYSIS] Updating existing report for session: {interview_session.id}"
            )
            # Update the existing report
            _update_report_from_dict(existing_report, report)
            existing_report.updated_at = datetime.utcnow()
            db.add(existing_report)
        else:
            logger.info(
                f"[INTERVIEW_ANALYSIS] Creating new report for session: {interview_session.id}"
            )
            # Create a new report
            new_report = _create_report_from_dict(interview_session.id, report)
            db.add(new_report)

        logger.info(
            f"[INTERVIEW_ANALYSIS] Report saved for resume_id: {resume_id}, session: {interview_session.id}"
        )

    except Exception as e:
        logger.error(f"[INTERVIEW_ANALYSIS] Error saving report: {str(e)}")
def _create_report_from_dict(
    interview_session_id: int, report: dict
) -> "InterviewReport":
    """Creates an InterviewReport object from a report dictionary"""
    from app.models.interview_report import InterviewReport, RecommendationType

    # Extract per-criterion scores
    scores = report.get("scores", {})

    return InterviewReport(
        interview_session_id=interview_session_id,
        # Core evaluation criteria
        technical_skills_score=scores.get("technical_skills", {}).get("score", 0),
        technical_skills_justification=scores.get("technical_skills", {}).get(
            "justification", ""
        ),
        technical_skills_concerns=scores.get("technical_skills", {}).get(
            "concerns", ""
        ),
        experience_relevance_score=scores.get("experience_relevance", {}).get(
            "score", 0
        ),
        experience_relevance_justification=scores.get("experience_relevance", {}).get(
            "justification", ""
        ),
        experience_relevance_concerns=scores.get("experience_relevance", {}).get(
            "concerns", ""
        ),
        communication_score=scores.get("communication", {}).get("score", 0),
        communication_justification=scores.get("communication", {}).get(
            "justification", ""
        ),
        communication_concerns=scores.get("communication", {}).get("concerns", ""),
        problem_solving_score=scores.get("problem_solving", {}).get("score", 0),
        problem_solving_justification=scores.get("problem_solving", {}).get(
            "justification", ""
        ),
        problem_solving_concerns=scores.get("problem_solving", {}).get("concerns", ""),
        cultural_fit_score=scores.get("cultural_fit", {}).get("score", 0),
        cultural_fit_justification=scores.get("cultural_fit", {}).get(
            "justification", ""
        ),
        cultural_fit_concerns=scores.get("cultural_fit", {}).get("concerns", ""),
        # Aggregated fields
        overall_score=report.get("overall_score", 0),
        recommendation=RecommendationType(report.get("recommendation", "reject")),
        # Additional fields
        strengths=report.get("strengths", []),
        weaknesses=report.get("weaknesses", []),
        red_flags=report.get("red_flags", []),
        # Interview metrics
        dialogue_messages_count=report.get("analysis_context", {}).get(
            "dialogue_messages_count", 0
        ),
        # Additional information
        next_steps=report.get("next_steps", ""),
        questions_analysis=report.get("questions_analysis", []),
        # Analysis metadata
        analysis_method=report.get("analysis_method", "openai_gpt4"),
    )
def _update_report_from_dict(existing_report, report: dict):
    """Updates an existing report with data from a dictionary"""
    from app.models.interview_report import RecommendationType

    scores = report.get("scores", {})

    # Core evaluation criteria
    if "technical_skills" in scores:
        existing_report.technical_skills_score = scores["technical_skills"].get(
            "score", 0
        )
        existing_report.technical_skills_justification = scores["technical_skills"].get(
            "justification", ""
        )
        existing_report.technical_skills_concerns = scores["technical_skills"].get(
            "concerns", ""
        )

    if "experience_relevance" in scores:
        existing_report.experience_relevance_score = scores["experience_relevance"].get(
            "score", 0
        )
        existing_report.experience_relevance_justification = scores[
            "experience_relevance"
        ].get("justification", "")
        existing_report.experience_relevance_concerns = scores[
            "experience_relevance"
        ].get("concerns", "")

    if "communication" in scores:
        existing_report.communication_score = scores["communication"].get("score", 0)
        existing_report.communication_justification = scores["communication"].get(
            "justification", ""
        )
        existing_report.communication_concerns = scores["communication"].get(
            "concerns", ""
        )

    if "problem_solving" in scores:
        existing_report.problem_solving_score = scores["problem_solving"].get(
            "score", 0
        )
        existing_report.problem_solving_justification = scores["problem_solving"].get(
            "justification", ""
        )
        existing_report.problem_solving_concerns = scores["problem_solving"].get(
            "concerns", ""
        )

    if "cultural_fit" in scores:
        existing_report.cultural_fit_score = scores["cultural_fit"].get("score", 0)
        existing_report.cultural_fit_justification = scores["cultural_fit"].get(
            "justification", ""
        )
        existing_report.cultural_fit_concerns = scores["cultural_fit"].get(
            "concerns", ""
        )

    # Aggregated fields
    if "overall_score" in report:
        existing_report.overall_score = report["overall_score"]
    if "recommendation" in report:
        existing_report.recommendation = RecommendationType(report["recommendation"])

    # Additional fields
    if "strengths" in report:
        existing_report.strengths = report["strengths"]
    if "weaknesses" in report:
        existing_report.weaknesses = report["weaknesses"]
    if "red_flags" in report:
        existing_report.red_flags = report["red_flags"]

    # Interview metrics
    if "analysis_context" in report:
        existing_report.dialogue_messages_count = report["analysis_context"].get(
            "dialogue_messages_count", 0
        )

    # Additional information
    if "next_steps" in report:
        existing_report.next_steps = report["next_steps"]
    if "questions_analysis" in report:
        existing_report.questions_analysis = report["questions_analysis"]

    # Analysis metadata
    if "analysis_method" in report:
        existing_report.analysis_method = report["analysis_method"]
# Additional task for mass analysis
@shared_task
def analyze_multiple_candidates(resume_ids: list[int]):
    """
    Analyzes several candidates and returns their ranking

    Args:
        resume_ids: List of resume IDs to analyze

    Returns:
        list[dict]: List of candidates with scores, sorted by rating
    """
    logger.info(f"[MASS_ANALYSIS] Starting analysis for {len(resume_ids)} candidates")

    results = []

    for resume_id in resume_ids:
        try:
            result = generate_interview_report(resume_id)
            if "error" not in result:
                results.append(
                    {
                        "resume_id": resume_id,
                        "candidate_name": result.get("candidate_name", "Unknown"),
                        "overall_score": result.get("overall_score", 0),
                        "recommendation": result.get("recommendation", "reject"),
                        "position": result.get("position", "Unknown"),
                    }
                )
        except Exception as e:
            logger.error(
                f"[MASS_ANALYSIS] Error analyzing resume {resume_id}: {str(e)}"
            )

    # Sort by overall score
    results.sort(key=lambda x: x["overall_score"], reverse=True)

    logger.info(f"[MASS_ANALYSIS] Completed analysis for {len(results)} candidates")
    return results
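After collecting per-candidate results, the only post-processing the mass-analysis task does is a descending sort on `overall_score`; the resulting order is easy to verify on a small synthetic list:

```python
# Synthetic candidate results, shaped like the dicts the task builds
candidates = [
    {"resume_id": 1, "overall_score": 62, "recommendation": "consider"},
    {"resume_id": 2, "overall_score": 88, "recommendation": "strongly_recommend"},
    {"resume_id": 3, "overall_score": 75, "recommendation": "consider"},
]

# Same sort as analyze_multiple_candidates: best candidate first
candidates.sort(key=lambda x: x["overall_score"], reverse=True)

print([c["resume_id"] for c in candidates])  # [2, 3, 1]
```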


@@ -1,9 +1,7 @@
import psutil

from celery_worker.celery_app import celery_app
from celery_worker.database import get_sync_session
@celery_app.task(bind=True)
@@ -13,86 +11,94 @@ def cleanup_interview_processes_task(self):
""" """
try: try:
self.update_state( self.update_state(
state='PROGRESS', state="PROGRESS",
meta={'status': 'Checking for dead AI processes...', 'progress': 10} meta={"status": "Checking for dead AI processes...", "progress": 10},
) )
# Используем синхронный подход для Celery # Используем синхронный подход для Celery
with get_sync_session() as session: with get_sync_session() as session:
# Получаем все "активные" сессии из БД # Получаем все "активные" сессии из БД
from app.models.interview import InterviewSession from app.models.interview import InterviewSession
active_sessions = session.query(InterviewSession).filter(
InterviewSession.ai_agent_status == "running" active_sessions = (
).all() session.query(InterviewSession)
.filter(InterviewSession.ai_agent_status == "running")
.all()
)
cleaned_count = 0 cleaned_count = 0
total_sessions = len(active_sessions) total_sessions = len(active_sessions)
self.update_state( self.update_state(
state='PROGRESS', state="PROGRESS",
meta={'status': f'Found {total_sessions} potentially active sessions...', 'progress': 30} meta={
"status": f"Found {total_sessions} potentially active sessions...",
"progress": 30,
},
) )
for i, interview_session in enumerate(active_sessions): for i, interview_session in enumerate(active_sessions):
if interview_session.ai_agent_pid: if interview_session.ai_agent_pid:
try: try:
# Проверяем, жив ли процесс # Проверяем, жив ли процесс
process = psutil.Process(interview_session.ai_agent_pid) process = psutil.Process(interview_session.ai_agent_pid)
if not process.is_running(): if not process.is_running():
# Процесс мертв, обновляем статус # Процесс мертв, обновляем статус
interview_session.ai_agent_pid = None interview_session.ai_agent_pid = None
interview_session.ai_agent_status = "stopped" interview_session.ai_agent_status = "stopped"
session.add(interview_session) session.add(interview_session)
cleaned_count += 1 cleaned_count += 1
except psutil.NoSuchProcess: except psutil.NoSuchProcess:
# Процесс не существует # Процесс не существует
interview_session.ai_agent_pid = None interview_session.ai_agent_pid = None
interview_session.ai_agent_status = "stopped" interview_session.ai_agent_status = "stopped"
session.add(interview_session) session.add(interview_session)
cleaned_count += 1 cleaned_count += 1
except Exception as e: except Exception as e:
print(f"Error checking process {interview_session.ai_agent_pid}: {str(e)}") print(
f"Error checking process {interview_session.ai_agent_pid}: {str(e)}"
)
# Обновляем прогресс # Обновляем прогресс
progress = 30 + (i + 1) / total_sessions * 60 progress = 30 + (i + 1) / total_sessions * 60
self.update_state( self.update_state(
state='PROGRESS', state="PROGRESS",
meta={ meta={
'status': f'Processed {i + 1}/{total_sessions} sessions...', "status": f"Processed {i + 1}/{total_sessions} sessions...",
'progress': progress "progress": progress,
} },
) )
# Сохраняем изменения # Сохраняем изменения
session.commit() session.commit()
self.update_state( self.update_state(
state='SUCCESS', state="SUCCESS",
meta={ meta={
'status': f'Cleanup completed. Cleaned {cleaned_count} dead processes.', "status": f"Cleanup completed. Cleaned {cleaned_count} dead processes.",
'progress': 100, "progress": 100,
'cleaned_count': cleaned_count, "cleaned_count": cleaned_count,
'total_checked': total_sessions "total_checked": total_sessions,
} },
) )
return { return {
'status': 'completed', "status": "completed",
'cleaned_count': cleaned_count, "cleaned_count": cleaned_count,
'total_checked': total_sessions "total_checked": total_sessions,
} }
except Exception as e: except Exception as e:
self.update_state( self.update_state(
state='FAILURE', state="FAILURE",
meta={ meta={
'status': f'Error during cleanup: {str(e)}', "status": f"Error during cleanup: {str(e)}",
'progress': 0, "progress": 0,
'error': str(e) "error": str(e),
} },
) )
raise raise
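The per-session progress reported by the cleanup task is a linear ramp: 30% after the discovery step, rising to 90% once every session has been checked. The formula in isolation:

```python
def cleanup_progress(i: int, total_sessions: int) -> float:
    # Same expression as in cleanup_interview_processes_task:
    # 30% after discovery, 90% when all sessions have been checked
    return 30 + (i + 1) / total_sessions * 60

print([cleanup_progress(i, 4) for i in range(4)])  # [45.0, 60.0, 75.0, 90.0]
```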
@@ -104,87 +110,93 @@ def force_kill_interview_process_task(self, session_id: int):
""" """
try: try:
self.update_state( self.update_state(
state='PROGRESS', state="PROGRESS",
meta={'status': f'Looking for session {session_id}...', 'progress': 20} meta={"status": f"Looking for session {session_id}...", "progress": 20},
) )
with get_sync_session() as session: with get_sync_session() as session:
from app.models.interview import InterviewSession from app.models.interview import InterviewSession
interview_session = session.query(InterviewSession).filter( interview_session = (
InterviewSession.id == session_id session.query(InterviewSession)
).first() .filter(InterviewSession.id == session_id)
.first()
)
if not interview_session: if not interview_session:
return { return {
'status': 'not_found', "status": "not_found",
'message': f'Session {session_id} not found' "message": f"Session {session_id} not found",
} }
if not interview_session.ai_agent_pid: if not interview_session.ai_agent_pid:
return { return {
'status': 'no_process', "status": "no_process",
'message': f'No AI process found for session {session_id}' "message": f"No AI process found for session {session_id}",
} }
self.update_state( self.update_state(
state='PROGRESS', state="PROGRESS",
meta={'status': f'Terminating process {interview_session.ai_agent_pid}...', 'progress': 50} meta={
"status": f"Terminating process {interview_session.ai_agent_pid}...",
"progress": 50,
},
) )
try: try:
process = psutil.Process(interview_session.ai_agent_pid) process = psutil.Process(interview_session.ai_agent_pid)
# Graceful terminate # Graceful terminate
process.terminate() process.terminate()
# Ждем до 5 секунд # Ждем до 5 секунд
import time import time
for _ in range(50): for _ in range(50):
if not process.is_running(): if not process.is_running():
break break
time.sleep(0.1) time.sleep(0.1)
# Если не помогло, убиваем принудительно # Если не помогло, убиваем принудительно
if process.is_running(): if process.is_running():
process.kill() process.kill()
time.sleep(0.5) # Даем время на завершение time.sleep(0.5) # Даем время на завершение
# Обновляем статус в БД # Обновляем статус в БД
interview_session.ai_agent_pid = None interview_session.ai_agent_pid = None
interview_session.ai_agent_status = "stopped" interview_session.ai_agent_status = "stopped"
session.add(interview_session) session.add(interview_session)
session.commit() session.commit()
self.update_state( self.update_state(
state='SUCCESS', state="SUCCESS",
meta={'status': 'Process terminated successfully', 'progress': 100} meta={"status": "Process terminated successfully", "progress": 100},
) )
return { return {
'status': 'terminated', "status": "terminated",
'message': f'AI process for session {session_id} terminated successfully' "message": f"AI process for session {session_id} terminated successfully",
} }
except psutil.NoSuchProcess: except psutil.NoSuchProcess:
# Процесс уже не существует # Процесс уже не существует
interview_session.ai_agent_pid = None interview_session.ai_agent_pid = None
interview_session.ai_agent_status = "stopped" interview_session.ai_agent_status = "stopped"
session.add(interview_session) session.add(interview_session)
session.commit() session.commit()
return { return {
'status': 'already_dead', "status": "already_dead",
'message': f'Process was already dead, cleaned up database' "message": "Process was already dead, cleaned up database",
} }
except Exception as e: except Exception as e:
self.update_state( self.update_state(
state='FAILURE', state="FAILURE",
meta={ meta={
'status': f'Error terminating process: {str(e)}', "status": f"Error terminating process: {str(e)}",
'progress': 0, "progress": 0,
'error': str(e) "error": str(e),
} },
) )
raise raise
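The escalation pattern in the task above (graceful terminate, poll for up to 5 seconds in 0.1 s steps, then a hard kill) can be sketched with a throwaway subprocess standing in for the AI agent process:

```python
import subprocess
import sys
import time

# A throwaway child process standing in for the AI agent
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"])

proc.terminate()              # graceful termination first
for _ in range(50):           # wait up to 5 seconds, 0.1 s steps
    if proc.poll() is not None:
        break
    time.sleep(0.1)
if proc.poll() is None:       # still running: escalate to a hard kill
    proc.kill()

print(proc.wait() != 0)  # True: the child did not exit cleanly
```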


@@ -1,55 +1,54 @@
import json
import os
from typing import Any

from celery_worker.celery_app import celery_app
from celery_worker.database import SyncResumeRepository, get_sync_session
from rag.llm.model import ResumeParser
from rag.registry import registry

# Import the new interview analysis tasks
from celery_worker.interview_analysis_task import (
    analyze_multiple_candidates,
    generate_interview_report,
)
def generate_interview_plan(
    resume_id: int, combined_data: dict[str, Any]
) -> dict[str, Any]:
    """Generates an interview plan based on the resume and the vacancy"""
    try:
        # Fetch the vacancy data from the DB
        with get_sync_session() as session:
            repo = SyncResumeRepository(session)
            resume_record = repo.get_by_id(resume_id)

            if not resume_record:
                return None

            # The vacancy data should be fetched here.
            # For now this is a stub; the link to the vacancy will be added later.
            vacancy_data = {
                "title": "Python Developer",
                "requirements": "Python, FastAPI, PostgreSQL, Docker",
                "company_name": "Tech Company",
                "experience_level": "Middle",
            }

        # Generate the plan via the LLM
        chat_model = registry.get_chat_model()

        plan_prompt = f"""
Создай детальный план интервью для кандидата на основе его резюме и требований вакансии.

РЕЗЮМЕ КАНДИДАТА:
- Имя: {combined_data.get("name", "Не указано")}
- Навыки: {", ".join(combined_data.get("skills", []))}
- Опыт: {combined_data.get("total_years", 0)} лет
- Образование: {combined_data.get("education", "Не указано")}

ВАКАНСИЯ:
- Позиция: {vacancy_data["title"]}
- Требования: {vacancy_data["requirements"]}
- Компания: {vacancy_data["company_name"]}
- Уровень: {vacancy_data["experience_level"]}

Создай план интервью в формате JSON:
{{
@@ -89,28 +88,31 @@ def generate_interview_plan(resume_id: int, combined_data: Dict[str, Any]) -> Di
"personalization_notes": "Кандидат имеет хороший технический опыт" "personalization_notes": "Кандидат имеет хороший технический опыт"
}} }}
""" """
from langchain.schema import HumanMessage, SystemMessage from langchain.schema import HumanMessage, SystemMessage
messages = [ messages = [
SystemMessage(content="Ты HR эксперт по планированию интервью. Создавай структурированные планы."), SystemMessage(
HumanMessage(content=plan_prompt) content="Ты HR эксперт по планированию интервью. Создавай структурированные планы."
),
HumanMessage(content=plan_prompt),
] ]
response = chat_model.get_llm().invoke(messages) response = chat_model.get_llm().invoke(messages)
response_text = response.content.strip() response_text = response.content.strip()
# Парсим JSON ответ # Парсим JSON ответ
if response_text.startswith('{') and response_text.endswith('}'): if response_text.startswith("{") and response_text.endswith("}"):
return json.loads(response_text) return json.loads(response_text)
else: else:
# Ищем JSON в тексте # Ищем JSON в тексте
start = response_text.find('{') start = response_text.find("{")
end = response_text.rfind('}') + 1 end = response_text.rfind("}") + 1
if start != -1 and end > start: if start != -1 and end > start:
return json.loads(response_text[start:end]) return json.loads(response_text[start:end])
return None return None
except Exception as e: except Exception as e:
print(f"Ошибка генерации плана интервью: {str(e)}") print(f"Ошибка генерации плана интервью: {str(e)}")
return None return None
@@ -120,24 +122,24 @@ def generate_interview_plan(resume_id: int, combined_data: Dict[str, Any]) -> Di
def parse_resume_task(self, resume_id: str, file_path: str):
    """
    Asynchronous resume-parsing task

    Args:
        resume_id: resume ID
        file_path: path to the resume PDF file
    """
    try:
        # Step 0: update status in the DB - parsing started
        with get_sync_session() as session:
            repo = SyncResumeRepository(session)
            repo.update_status(int(resume_id), "parsing")

        # Update the task status
        self.update_state(
            state="PENDING",
            meta={"status": "Начинаем парсинг резюме...", "progress": 10},
        )

        # Initialize models from the registry
        try:
            chat_model = registry.get_chat_model()
@@ -147,108 +149,116 @@ def parse_resume_task(self, resume_id: str, file_path: str):
            # Update status in the DB - model initialization failed
            with get_sync_session() as session:
                repo = SyncResumeRepository(session)
                repo.update_status(
                    int(resume_id),
                    "failed",
                    error_message=f"Ошибка инициализации моделей: {str(e)}",
                )
            raise Exception(f"Ошибка инициализации моделей: {str(e)}")

        # Step 1: parse the resume
        self.update_state(
            state="PROGRESS",
            meta={"status": "Извлекаем текст из PDF...", "progress": 20},
        )

        parser = ResumeParser(chat_model)

        if not os.path.exists(file_path):
            # Update status in the DB - file not found
            with get_sync_session() as session:
                repo = SyncResumeRepository(session)
                repo.update_status(
                    int(resume_id),
                    "failed",
                    error_message=f"Файл не найден: {file_path}",
                )
            raise Exception(f"Файл не найден: {file_path}")

        parsed_resume = parser.parse_resume_from_file(file_path)

        # Fetch the original form data
        with get_sync_session() as session:
            repo = SyncResumeRepository(session)
            resume_record = repo.get_by_id(int(resume_id))
            if not resume_record:
                raise Exception(f"Резюме с ID {resume_id} не найдено в базе данных")

            # Extract the needed fields while the session is still active
            applicant_name = resume_record.applicant_name
            applicant_email = resume_record.applicant_email
            applicant_phone = resume_record.applicant_phone

        # Combine the data: skills and experience from parsing, contacts from the form
        combined_data = parsed_resume.copy()
        combined_data["name"] = applicant_name
        combined_data["email"] = applicant_email
        combined_data["phone"] = applicant_phone or parsed_resume.get("phone", "")

        # Step 2: vectorize and store in Milvus
        self.update_state(
            state="PENDING",
            meta={"status": "Сохраняем в векторную базу...", "progress": 60},
        )

        vector_store.add_candidate_profile(str(resume_id), combined_data)

        # Step 3: update status in PostgreSQL - processed successfully
        self.update_state(
            state="PENDING",
            meta={"status": "Обновляем статус в базе данных...", "progress": 85},
        )

        # Step 4: generate the interview plan
        self.update_state(
            state="PENDING",
            meta={"status": "Генерируем план интервью...", "progress": 90},
        )

        interview_plan = generate_interview_plan(int(resume_id), combined_data)

        with get_sync_session() as session:
            repo = SyncResumeRepository(session)
            repo.update_status(int(resume_id), "parsed", parsed_data=combined_data)

            # Save the interview plan
            if interview_plan:
                repo.update_interview_plan(int(resume_id), interview_plan)

        # Finished successfully
        self.update_state(
            state="SUCCESS",
            meta={
                "status": "Резюме успешно обработано и план интервью готов",
                "progress": 100,
                "result": combined_data,
            },
        )

        return {
            "resume_id": resume_id,
            "status": "completed",
            "parsed_data": combined_data,
        }

    except Exception as e:
        # On failure
        self.update_state(
            state="FAILURE",
            meta={
                "status": f"Ошибка при обработке резюме: {str(e)}",
                "progress": 0,
                "error": str(e),
            },
        )

        # Mark the DB record as failed
        try:
            with get_sync_session() as session:
                repo = SyncResumeRepository(session)
                repo.update_status(int(resume_id), "failed", error_message=str(e))
        except Exception as db_error:
            print(f"Ошибка при обновлении статуса в БД: {str(db_error)}")

        raise
@@ -259,63 +269,65 @@ def parse_resume_task(self, resume_id: str, file_path: str):
def generate_interview_questions_task(self, resume_id: str, job_description: str):
    """
    Generate personalized interview questions from the resume and the vacancy description

    Args:
        resume_id: resume ID
        job_description: vacancy description
    """
    try:
        self.update_state(
            state="PENDING",
            meta={"status": "Начинаем генерацию вопросов...", "progress": 10},
        )

        # Initialize models
        try:
            chat_model = registry.get_chat_model()
            vector_store = registry.get_vector_store()
        except Exception as e:
            raise Exception(f"Ошибка инициализации моделей: {str(e)}")

        # Step 1: fetch the parsed resume from the database
        self.update_state(
            state="PENDING",
            meta={"status": "Получаем данные резюме...", "progress": 20},
        )

        with get_sync_session() as session:
            repo = SyncResumeRepository(session)
            resume = repo.get_by_id(int(resume_id))
            if not resume:
                raise Exception(f"Резюме с ID {resume_id} не найдено")
            if not resume.parsed_data:
                raise Exception(f"Резюме {resume_id} еще не обработано")

        # Step 2: pull similar candidates from Milvus for analysis
        self.update_state(
            state="PENDING",
            meta={"status": "Анализируем профиль кандидата...", "progress": 40},
        )

        candidate_skills = " ".join(resume.parsed_data.get("skills", []))
        similar_candidates = vector_store.search_similar_candidates(
            candidate_skills, k=3
        )

        # Step 3: generate personalized questions via the LLM
        self.update_state(
            state="PENDING",
            meta={"status": "Генерируем вопросы для интервью...", "progress": 70},
        )

        questions_prompt = f"""
Сгенерируй 10 персонализированных вопросов для интервью кандидата на основе его резюме и описания вакансии.

РЕЗЮМЕ КАНДИДАТА:
Имя: {resume.parsed_data.get("name", "Не указано")}
Навыки: {", ".join(resume.parsed_data.get("skills", []))}
Опыт работы: {resume.parsed_data.get("total_years", 0)} лет
Образование: {resume.parsed_data.get("education", "Не указано")}

ОПИСАНИЕ ВАКАНСИИ:
{job_description}
@@ -339,77 +351,84 @@ def generate_interview_questions_task(self, resume_id: str, job_description: str
    ]
}}
"""

        from langchain.schema import HumanMessage, SystemMessage

        messages = [
            SystemMessage(
                content="Ты эксперт по проведению технических интервью. Генерируй качественные, персонализированные вопросы."
            ),
            HumanMessage(content=questions_prompt),
        ]

        response = chat_model.get_llm().invoke(messages)

        # Parse the reply
        import json

        response_text = response.content.strip()

        # Extract JSON from the reply
        if response_text.startswith("{") and response_text.endswith("}"):
            questions_data = json.loads(response_text)
        else:
            # Look for JSON inside the text
            start = response_text.find("{")
            end = response_text.rfind("}") + 1
            if start != -1 and end > start:
                json_str = response_text[start:end]
                questions_data = json.loads(json_str)
            else:
                raise ValueError("JSON не найден в ответе LLM")

        # Step 4: store the questions in the resume notes (temporary; a dedicated table can come later)
        self.update_state(
            state="PENDING", meta={"status": "Сохраняем вопросы...", "progress": 90}
        )

        with get_sync_session() as session:
            repo = SyncResumeRepository(session)
            resume = repo.get_by_id(int(resume_id))
            if resume:
                # Store the questions in notes (for now)
                existing_notes = resume.notes or ""
                interview_questions = json.dumps(
                    questions_data, ensure_ascii=False, indent=2
                )
                resume.notes = (
                    f"{existing_notes}\n\nINTERVIEW QUESTIONS:\n{interview_questions}"
                )

                from datetime import datetime

                resume.updated_at = datetime.utcnow()
                session.add(resume)

        # Finished successfully
        self.update_state(
            state="SUCCESS",
            meta={
                "status": "Вопросы для интервью успешно сгенерированы",
                "progress": 100,
                "result": questions_data,
            },
        )

        return {
            "resume_id": resume_id,
            "status": "questions_generated",
            "questions": questions_data["questions"],
        }

    except Exception as e:
        # On failure
        self.update_state(
            state="FAILURE",
            meta={
                "status": f"Ошибка при генерации вопросов: {str(e)}",
                "progress": 0,
                "error": str(e),
            },
        )

        raise Exception(f"Ошибка при генерации вопросов: {str(e)}")
main.py
@@ -1,24 +1,42 @@
from contextlib import asynccontextmanager

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from app.core.session_middleware import SessionMiddleware
from app.routers import resume_router, vacancy_router
from app.routers.admin_router import router as admin_router
from app.routers.analysis_router import router as analysis_router
from app.routers.interview_router import router as interview_router
from app.routers.session_router import router as session_router


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Start the AI agent on application startup
    from app.services.agent_manager import agent_manager

    print("[STARTUP] Starting AI Agent...")
    success = await agent_manager.start_agent()
    if success:
        print("[STARTUP] AI Agent started successfully")
    else:
        print("[STARTUP] Failed to start AI Agent")

    yield

    # Stop the AI agent on application shutdown
    print("[SHUTDOWN] Stopping AI Agent...")
    await agent_manager.stop_agent()
    print("[SHUTDOWN] AI Agent stopped")


app = FastAPI(
    title="HR AI Backend",
    description="Backend API for HR AI system with vacancies and resumes management",
    version="1.0.0",
    lifespan=lifespan,
)

app.add_middleware(
@@ -47,4 +65,4 @@ async def root():
@app.get("/health")
async def health_check():
    return {"status": "healthy"}
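The new `lifespan` hook starts the agent before the app serves traffic and stops it afterwards. A minimal stdlib-only sketch of that ordering, with a stub standing in for `app.services.agent_manager` (no FastAPI required):

```python
import asyncio
from contextlib import asynccontextmanager

events: list[str] = []


class StubAgentManager:
    """Stand-in for app.services.agent_manager.agent_manager."""

    async def start_agent(self) -> bool:
        events.append("started")
        return True

    async def stop_agent(self) -> None:
        events.append("stopped")


agent_manager = StubAgentManager()


@asynccontextmanager
async def lifespan(app: object):
    # Startup phase: everything before `yield` runs once, before serving
    await agent_manager.start_agent()
    yield
    # Shutdown phase: everything after `yield` runs once, on exit
    await agent_manager.stop_agent()


async def main() -> None:
    async with lifespan(app=None):
        events.append("serving")


asyncio.run(main())
print(events)  # startup before serving, shutdown after serving
```

FastAPI drives the same context manager around the server's lifetime, which is why a failed `start_agent` is worth surfacing loudly at startup rather than at the first interview request.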
@@ -1,11 +1,9 @@
import asyncio
from logging.config import fileConfig

from alembic import context
from sqlalchemy import pool
from sqlalchemy.ext.asyncio import async_engine_from_config
from sqlmodel import SQLModel

from app.core.config import settings
@@ -75,9 +73,7 @@ def run_migrations_online() -> None:
        await connection.run_sync(do_run_migrations)


def do_run_migrations(connection):
    context.configure(connection=connection, target_metadata=target_metadata)

    with context.begin_transaction():
        context.run_migrations()
@@ -5,26 +5,26 @@ Revises: 4d04e6e32445
Create Date: 2025-09-02 23:38:36.541565

"""

from collections.abc import Sequence

import sqlalchemy as sa
from alembic import op

# revision identifiers, used by Alembic.
revision: str = "1a2cda4df181"
down_revision: str | Sequence[str] | None = "4d04e6e32445"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None


def upgrade() -> None:
    """Upgrade schema."""
    # Add interview_plan column to resume table
    op.add_column("resume", sa.Column("interview_plan", sa.JSON(), nullable=True))


def downgrade() -> None:
    """Downgrade schema."""
    # Drop interview_plan column
    op.drop_column("resume", "interview_plan")
@@ -5,28 +5,32 @@ Revises: 4723b138a3bb
Create Date: 2025-09-02 20:00:00.689080

"""

from collections.abc import Sequence

import sqlalchemy as sa
from alembic import op

# revision identifiers, used by Alembic.
revision: str = "385d03e3281c"
down_revision: str | Sequence[str] | None = "4723b138a3bb"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None


def upgrade() -> None:
    """Upgrade schema."""
    # Create InterviewStatus enum type
    interview_status_enum = sa.Enum(
        "created", "active", "completed", "failed", name="interviewstatus"
    )
    interview_status_enum.create(op.get_bind())


def downgrade() -> None:
    """Downgrade schema."""
    # Drop InterviewStatus enum type
    interview_status_enum = sa.Enum(
        "created", "active", "completed", "failed", name="interviewstatus"
    )
    interview_status_enum.drop(op.get_bind())
@@ -5,43 +5,48 @@ Revises: dba37152ae9a
Create Date: 2025-09-02 19:31:03.531702

"""

from collections.abc import Sequence

import sqlalchemy as sa
from alembic import op

# revision identifiers, used by Alembic.
revision: str = "4723b138a3bb"
down_revision: str | Sequence[str] | None = "dba37152ae9a"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None


def upgrade() -> None:
    """Upgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table(
        "interview_sessions",
        sa.Column("id", sa.Integer(), nullable=False),
        sa.Column("resume_id", sa.Integer(), nullable=False),
        sa.Column("room_name", sa.String(length=255), nullable=False),
        sa.Column("status", sa.String(length=50), nullable=False),
        sa.Column("transcript", sa.Text(), nullable=True),
        sa.Column("ai_feedback", sa.Text(), nullable=True),
        sa.Column("started_at", sa.DateTime(), nullable=False),
        sa.Column("completed_at", sa.DateTime(), nullable=True),
        sa.ForeignKeyConstraint(
            ["resume_id"],
            ["resume.id"],
        ),
        sa.PrimaryKeyConstraint("id"),
        sa.UniqueConstraint("room_name"),
    )
    op.create_index(
        op.f("ix_interview_sessions_id"), "interview_sessions", ["id"], unique=False
    )
    # ### end Alembic commands ###


def downgrade() -> None:
    """Downgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_index(op.f("ix_interview_sessions_id"), table_name="interview_sessions")
    op.drop_table("interview_sessions")
    # ### end Alembic commands ###
@@ -5,59 +5,83 @@ Revises: 96ffcf34e1de
Create Date: 2025-09-02 20:10:52.321402

"""

from collections.abc import Sequence

import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects import postgresql

# revision identifiers, used by Alembic.
revision: str = "4d04e6e32445"
down_revision: str | Sequence[str] | None = "96ffcf34e1de"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None


def upgrade() -> None:
    """Upgrade schema."""
    # Recreate interview_sessions table with proper enum (enum already exists)
    op.drop_index(op.f("ix_interview_sessions_id"), table_name="interview_sessions")
    op.drop_table("interview_sessions")

    # Create table with existing enum type
    op.create_table(
        "interview_sessions",
        sa.Column("id", sa.Integer(), nullable=False),
        sa.Column("resume_id", sa.Integer(), nullable=False),
        sa.Column("room_name", sa.String(length=255), nullable=False),
        sa.Column(
            "status",
            postgresql.ENUM(
                "created",
                "active",
                "completed",
                "failed",
                name="interviewstatus",
                create_type=False,
            ),
            nullable=False,
        ),
        sa.Column("transcript", sa.Text(), nullable=True),
        sa.Column("ai_feedback", sa.Text(), nullable=True),
        sa.Column("started_at", sa.DateTime(), nullable=False),
        sa.Column("completed_at", sa.DateTime(), nullable=True),
        sa.ForeignKeyConstraint(
            ["resume_id"],
            ["resume.id"],
        ),
        sa.PrimaryKeyConstraint("id"),
        sa.UniqueConstraint("room_name"),
    )
    op.create_index(
        op.f("ix_interview_sessions_id"), "interview_sessions", ["id"], unique=False
    )


def downgrade() -> None:
    """Downgrade schema."""
    op.drop_index(op.f("ix_interview_sessions_id"), table_name="interview_sessions")
    op.drop_table("interview_sessions")

    # Recreate old table structure
    op.create_table(
        "interview_sessions",
        sa.Column("id", sa.Integer(), nullable=False),
        sa.Column("resume_id", sa.Integer(), nullable=False),
        sa.Column("room_name", sa.String(length=255), nullable=False),
        sa.Column("status", sa.String(length=50), nullable=False),
        sa.Column("transcript", sa.Text(), nullable=True),
        sa.Column("ai_feedback", sa.Text(), nullable=True),
        sa.Column("started_at", sa.DateTime(), nullable=False),
        sa.Column("completed_at", sa.DateTime(), nullable=True),
        sa.ForeignKeyConstraint(
            ["resume_id"],
            ["resume.id"],
        ),
        sa.PrimaryKeyConstraint("id"),
        sa.UniqueConstraint("room_name"),
    )
    op.create_index(
        op.f("ix_interview_sessions_id"), "interview_sessions", ["id"], unique=False
    )
@@ -5,25 +5,25 @@ Revises: ae966b3e742e
Create Date: 2025-08-30 20:38:36.867781

"""

from collections.abc import Sequence

import sqlalchemy as sa
from alembic import op

# revision identifiers, used by Alembic.
revision: str = "4e19b8fe4a88"
down_revision: str | Sequence[str] | None = "ae966b3e742e"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None


def upgrade() -> None:
    """Upgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    # First add the column as nullable
    op.add_column("resume", sa.Column("session_id", sa.Integer(), nullable=True))

    # Create a fallback session for any existing resumes
    op.execute("""
        INSERT INTO session (session_id, is_active, expires_at, last_activity, created_at, updated_at)
@@ -36,24 +36,24 @@ def upgrade() -> None:
            NOW()
        WHERE NOT EXISTS (SELECT 1 FROM session LIMIT 1)
    """)

    # Attach existing resumes to the first session
    op.execute("""
        UPDATE resume
        SET session_id = (SELECT id FROM session ORDER BY id LIMIT 1)
        WHERE session_id IS NULL
    """)

    # Now make the column NOT NULL
    op.alter_column("resume", "session_id", nullable=False)
    op.create_foreign_key(None, "resume", "session", ["session_id"], ["id"])
    # ### end Alembic commands ###


def downgrade() -> None:
    """Downgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_constraint(None, "resume", type_="foreignkey")
    op.drop_column("resume", "session_id")
    # ### end Alembic commands ###
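This migration follows the usual three-step recipe for adding a NOT NULL column to a populated table: add it nullable, backfill existing rows, then tighten the constraint. A small SQLite sketch of the same sequence (step 3 appears only as the PostgreSQL statement in a comment, since SQLite cannot change a column's nullability in place):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE session (id INTEGER PRIMARY KEY);
    CREATE TABLE resume (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO session (id) VALUES (1);
    INSERT INTO resume (name) VALUES ('existing row');
""")

# Step 1: add the column as nullable so existing rows remain valid
conn.execute("ALTER TABLE resume ADD COLUMN session_id INTEGER")

# Step 2: backfill existing rows with a fallback session
conn.execute("""
    UPDATE resume
    SET session_id = (SELECT id FROM session ORDER BY id LIMIT 1)
    WHERE session_id IS NULL
""")

# Step 3 (PostgreSQL only; what op.alter_column(nullable=False) emits):
# ALTER TABLE resume ALTER COLUMN session_id SET NOT NULL;

rows = conn.execute("SELECT session_id FROM resume").fetchall()
print(rows)  # every backfilled row now carries a session_id
```

Doing the steps in this order keeps the migration safe on databases that already contain resumes; on an empty table the backfill is simply a no-op.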
@@ -5,17 +5,16 @@ Revises: a816820baadb
Create Date: 2025-09-04 00:02:15.230498

"""

from collections.abc import Sequence

from alembic import op

# revision identifiers, used by Alembic.
revision: str = "772538626a9e"
down_revision: str | Sequence[str] | None = "a816820baadb"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None


def upgrade() -> None:
@@ -26,7 +25,7 @@ def upgrade() -> None:
        ALTER COLUMN parsed_data TYPE JSON USING parsed_data::JSON,
        ALTER COLUMN interview_plan TYPE JSON USING interview_plan::JSON
    """)
    op.execute("""
        ALTER TABLE interview_sessions
        ALTER COLUMN dialogue_history TYPE JSON USING dialogue_history::JSON
@@ -41,7 +40,7 @@ def downgrade() -> None:
        ALTER COLUMN parsed_data TYPE TEXT USING parsed_data::TEXT,
        ALTER COLUMN interview_plan TYPE TEXT USING interview_plan::TEXT
    """)
    op.execute("""
        ALTER TABLE interview_sessions
        ALTER COLUMN dialogue_history TYPE TEXT USING dialogue_history::TEXT
@@ -5,27 +5,26 @@ Revises: a694f7c9e766
Create Date: 2025-08-30 20:00:00.661534

"""

from collections.abc import Sequence

from alembic import op

# revision identifiers, used by Alembic.
revision: str = "7ffa784ab042"
down_revision: str | Sequence[str] | None = "a694f7c9e766"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None


def upgrade() -> None:
    """Add sample vacancies."""
    # Create sample vacancies data
    vacancies_data = [
        {
            "title": "Senior Python Developer",
            "description": """Мы ищем опытного Python-разработчика для работы в команде разработки высоконагруженного веб-сервиса.

Обязанности:
Разработка и поддержка API на Python (FastAPI/Django)
@@ -45,31 +44,31 @@ def upgrade() -> None:
Будет плюсом:
Опыт работы с облачными сервисами (AWS/GCP)
Знание Go или Node.js
Опыт ведения технических интервью""",
            "key_skills": "Python, FastAPI, Django, PostgreSQL, Redis, Docker, Kubernetes, Микросервисы, REST API, Git",
            "employment_type": "FULL_TIME",
            "experience": "MORE_THAN_6",
            "schedule": "REMOTE",
            "salary_from": 250000,
            "salary_to": 400000,
            "salary_currency": "RUR",
            "gross_salary": False,
            "company_name": "TechCorp Solutions",
            "company_description": "Компания-разработчик инновационных решений в области fintech. У нас работает более 500 специалистов, офисы в Москве и Санкт-Петербурге.",
            "area_name": "Москва",
            "metro_stations": "Сокольники, Красносельская",
            "address": "г. Москва, ул. Русаковская, д. 13",
            "professional_roles": "Программист, разработчик",
            "contacts_name": "Анна Петрова",
            "contacts_email": "hr@techcorp.ru",
            "contacts_phone": "+7 (495) 123-45-67",
            "is_archived": False,
            "premium": True,
            "url": "https://techcorp.ru/careers/senior-python",
        },
        {
            "title": "Frontend React Developer",
            "description": """Приглашаем талантливого фронтенд-разработчика для создания современных веб-приложений.

Задачи:
Разработка пользовательских интерфейсов на React
@@ -91,31 +90,31 @@ def upgrade() -> None:
Гибкий график работы
Медицинское страхование
Обучение за счет компании
Дружная команда профессионалов""",
            "key_skills": "React, TypeScript, JavaScript, HTML5, CSS3, SASS, Redux, Webpack, Git, REST API",
            "employment_type": "FULL_TIME",
            "experience": "BETWEEN_3_AND_6",
            "schedule": "FLEXIBLE",
            "salary_from": 150000,
            "salary_to": 250000,
            "salary_currency": "RUR",
            "gross_salary": False,
            "company_name": "Digital Agency Pro",
            "company_description": "Креативное digital-агентство, специализирующееся на разработке веб-приложений и мобильных решений для крупных брендов.",
            "area_name": "Санкт-Петербург",
            "metro_stations": "Технологический институт, Пушкинская",
            "address": "г. Санкт-Петербург, ул. Правды, д. 10",
            "professional_roles": "Программист, разработчик",
            "contacts_name": "Михаил Сидоров",
            "contacts_email": "jobs@digitalagency.ru",
            "contacts_phone": "+7 (812) 987-65-43",
            "is_archived": False,
            "premium": False,
            "url": "https://digitalagency.ru/vacancy/react-dev",
        },
        {
            "title": "DevOps Engineer",
            "description": """Ищем DevOps-инженера для автоматизации процессов CI/CD и управления облачной инфраструктурой.

Основные задачи:
Проектирование и поддержка CI/CD pipeline
@@ -138,31 +137,31 @@ def upgrade() -> None:
Официальное трудоустройство
Компенсация обучения и сертификации
Современное оборудование
Возможность работы из дома""",
            "key_skills": "Docker, Kubernetes, AWS, Terraform, Ansible, Jenkins, GitLab CI/CD, Prometheus, Grafana, Linux",
            "employment_type": "FULL_TIME",
            "experience": "BETWEEN_3_AND_6",
            "schedule": "REMOTE",
            "salary_from": 200000,
            "salary_to": 350000,
            "salary_currency": "RUR",
            "gross_salary": False,
            "company_name": "CloudTech Systems",
            "company_description": "Системный интегратор, специализирующийся на внедрении облачных решений и автоматизации IT-процессов для корпоративных клиентов.",
            "area_name": "Москва",
            "metro_stations": "Белорусская, Маяковская",
            "address": "г. Москва, Тверская ул., д. 25",
            "professional_roles": "Системный администратор, DevOps",
            "contacts_name": "Елена Васильева",
            "contacts_email": "hr@cloudtech.ru",
            "contacts_phone": "+7 (495) 555-12-34",
            "is_archived": False,
            "premium": True,
            "url": "https://cloudtech.ru/careers/devops",
        },
        {
            "title": "Junior Java Developer",
            "description": """Приглашаем начинающего Java-разработчика для участия в крупных enterprise-проектах.

Обязанности:
Разработка backend-сервисов на Java
@@ -185,31 +184,31 @@ def upgrade() -> None:
Карьерный рост
Стабильную зарплату
Молодая и амбициозная команда
Интересные проекты в финтех сфере""",
            "key_skills": "Java, Spring Framework, SQL, Git, REST API, JUnit, Maven, PostgreSQL",
            "employment_type": "FULL_TIME",
            "experience": "BETWEEN_1_AND_3",
            "schedule": "FULL_DAY",
            "salary_from": 80000,
            "salary_to": 120000,
            "salary_currency": "RUR",
            "gross_salary": False,
            "company_name": "FinTech Innovations",
            "company_description": "Быстро развивающийся стартап в области финансовых технологий. Создаем инновационные решения для банков и финансовых институтов.",
            "area_name": "Екатеринбург",
            "metro_stations": "Площадь 1905 года, Динамо",
            "address": "г. Екатеринбург, ул. Ленина, д. 33",
            "professional_roles": "Программист, разработчик",
            "contacts_name": "Дмитрий Козлов",
            "contacts_email": "recruitment@fintech-inn.ru",
            "contacts_phone": "+7 (343) 456-78-90",
            "is_archived": False,
            "premium": False,
            "url": "https://fintech-inn.ru/jobs/java-junior",
        },
        {
            "title": "Product Manager IT",
            "description": """Ищем опытного продуктового менеджера для управления развитием digital-продуктов.

Основные задачи:
Управление продуктовой стратегией и roadmap
@@ -234,30 +233,30 @@ def upgrade() -> None:
Работу с топ-менеджментом компании
Современные инструменты и методики
Конкурентную заработную плату
Полный соц. пакет и ДМС""",
            "key_skills": "Product Management, Agile, Scrum, Аналитика, UX/UI, Jira, A/B тестирование, User Research",
            "employment_type": "FULL_TIME",
            "experience": "BETWEEN_3_AND_6",
            "schedule": "FLEXIBLE",
            "salary_from": 180000,
            "salary_to": 280000,
            "salary_currency": "RUR",
            "gross_salary": False,
            "company_name": "Marketplace Solutions",
            "company_description": "Один из лидеров российского e-commerce рынка. Развиваем крупнейшую онлайн-платформу с миллионами пользователей.",
            "area_name": "Москва",
            "metro_stations": "Парк культуры, Сокольники",
            "address": "г. Москва, Садовая-Триумфальная ул., д. 4/10",
            "professional_roles": "Менеджер продукта, Product Manager",
            "contacts_name": "Ольга Смирнова",
            "contacts_email": "pm-jobs@marketplace.ru",
            "contacts_phone": "+7 (495) 777-88-99",
            "is_archived": False,
            "premium": True,
            "url": "https://marketplace.ru/career/product-manager",
        },
    ]

    # Insert vacancies using raw SQL with proper enum casting
    for vacancy_data in vacancies_data:
        op.execute(f"""
@@ -268,29 +267,29 @@ def upgrade() -> None:
                professional_roles, contacts_name, contacts_email, contacts_phone,
                is_archived, premium, published_at, url, created_at, updated_at
            ) VALUES (
                '{vacancy_data["title"]}',
                '{vacancy_data["description"].replace("'", "''")}',
                '{vacancy_data["key_skills"]}',
                '{vacancy_data["employment_type"]}'::employmenttype,
                '{vacancy_data["experience"]}'::experience,
                '{vacancy_data["schedule"]}'::schedule,
                {vacancy_data["salary_from"]},
                {vacancy_data["salary_to"]},
                '{vacancy_data["salary_currency"]}',
                {vacancy_data["gross_salary"]},
                '{vacancy_data["company_name"]}',
                '{vacancy_data["company_description"].replace("'", "''")}',
                '{vacancy_data["area_name"]}',
                '{vacancy_data["metro_stations"]}',
                '{vacancy_data["address"]}',
                '{vacancy_data["professional_roles"]}',
                '{vacancy_data["contacts_name"]}',
                '{vacancy_data["contacts_email"]}',
                '{vacancy_data["contacts_phone"]}',
                {vacancy_data["is_archived"]},
                {vacancy_data["premium"]},
                NOW(),
                '{vacancy_data["url"]}',
                NOW(),
                NOW()
            )
@@ -301,12 +300,12 @@ def downgrade() -> None:
    """Remove sample vacancies."""
    # Remove the sample vacancies by their unique titles
    sample_titles = [
        "Senior Python Developer",
        "Frontend React Developer",
        "DevOps Engineer",
        "Junior Java Developer",
        "Product Manager IT",
    ]

    for title in sample_titles:
        op.execute(f"DELETE FROM vacancy WHERE title = '{title}'")
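The migration builds its SQL with f-strings and escapes single quotes by hand (`.replace("'", "''")`), which is workable for trusted seed data but fragile in general. A sketch of the parameterized alternative, using the stdlib `sqlite3` module and simplified table/column names rather than the project's Alembic/PostgreSQL setup, where the driver handles quoting:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vacancy (title TEXT, description TEXT)")

# A value with an embedded quote that would break naive interpolation
vacancy = {"title": "Senior Python Developer", "description": "We're hiring"}

# Placeholders let the driver quote values; no manual .replace("'", "''")
conn.execute(
    "INSERT INTO vacancy (title, description) VALUES (?, ?)",
    (vacancy["title"], vacancy["description"]),
)

row = conn.execute(
    "SELECT description FROM vacancy WHERE title = ?", (vacancy["title"],)
).fetchone()
print(row[0])  # We're hiring
```

In an Alembic migration the same effect can be had by passing a SQLAlchemy `text()` clause with bound parameters to `op.execute` instead of a preformatted string.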
@@ -5,23 +5,24 @@ Revises: 385d03e3281c
Create Date: 2025-09-02 20:01:52.904608

"""

from collections.abc import Sequence

from alembic import op

# revision identifiers, used by Alembic.
revision: str = "96ffcf34e1de"
down_revision: str | Sequence[str] | None = "385d03e3281c"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None


def upgrade() -> None:
    """Upgrade schema."""
    # Update status column to use interviewstatus enum
    op.execute(
        "ALTER TABLE interview_sessions ALTER COLUMN status TYPE interviewstatus USING status::interviewstatus"
    )


def downgrade() -> None:
@@ -5,25 +5,26 @@ Revises: 772538626a9e
Create Date: 2025-09-04 12:16:56.495018

"""

from collections.abc import Sequence

from alembic import op

# revision identifiers, used by Alembic.
revision: str = "9c60c15f7846"
down_revision: str | Sequence[str] | None = "772538626a9e"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None


def upgrade() -> None:
    """Create interview reports table with scoring fields."""
    # Create enum type for recommendation
    op.execute(
        "CREATE TYPE recommendationtype AS ENUM ('STRONGLY_RECOMMEND', 'RECOMMEND', 'CONSIDER', 'REJECT')"
    )

    # Create interview_reports table
    op.execute("""
        CREATE TABLE interview_reports (
@@ -84,13 +85,23 @@ def upgrade() -> None:
            FOREIGN KEY (interview_session_id) REFERENCES interview_sessions(id)
        )
    """)

    # Create useful indexes
    op.execute(
        "CREATE INDEX idx_interview_reports_overall_score ON interview_reports (overall_score DESC)"
    )
    op.execute(
        "CREATE INDEX idx_interview_reports_recommendation ON interview_reports (recommendation)"
    )
    op.execute(
        "CREATE INDEX idx_interview_reports_technical_skills ON interview_reports (technical_skills_score DESC)"
    )
    op.execute(
        "CREATE INDEX idx_interview_reports_communication ON interview_reports (communication_score DESC)"
    )
    op.execute(
        "CREATE INDEX idx_interview_reports_session_id ON interview_reports (interview_session_id)"
    )


def downgrade() -> None:
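The `recommendationtype` enum created by this migration would typically be mirrored by an application-side enum so that database values round-trip as typed members. A hypothetical sketch of such a mirror (the actual definition in `app/models/` may differ):

```python
from enum import Enum


class RecommendationType(str, Enum):
    """Hypothetical Python mirror of the PostgreSQL recommendationtype enum."""

    STRONGLY_RECOMMEND = "STRONGLY_RECOMMEND"
    RECOMMEND = "RECOMMEND"
    CONSIDER = "CONSIDER"
    REJECT = "REJECT"


# Raw strings from the DB resolve to the corresponding member,
# and str subclassing keeps comparisons with plain strings working
member = RecommendationType("RECOMMEND")
print(member is RecommendationType.RECOMMEND)  # True
print(member == "RECOMMEND")  # True
```

Keeping the member values byte-identical to the SQL enum labels is what makes the `status::recommendationtype` casts and ORM round-trips safe.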
@@ -5,41 +5,56 @@ Revises: 53d8b753cb71
Create Date: 2025-09-03 18:04:49.726882

"""

from collections.abc import Sequence

import sqlalchemy as sa
from alembic import op

# revision identifiers, used by Alembic.
revision: str = "9d415bf0ff2e"
down_revision: str | Sequence[str] | None = "c2d48b31ee30"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None


def upgrade() -> None:
    """Upgrade schema."""
    # First create the interview_sessions table (in case it was dropped)
    op.create_table(
        "interview_sessions",
        sa.Column("resume_id", sa.Integer(), nullable=False),
        sa.Column("room_name", sa.String(length=255), nullable=False),
        sa.Column(
            "status",
            sa.Enum(
                "created",
                "active",
                "completed",
                "failed",
                name="interviewstatus",
                create_type=False,
            ),
            nullable=True,
        ),
        sa.Column("transcript", sa.Text(), nullable=True),
        sa.Column("ai_feedback", sa.Text(), nullable=True),
        sa.Column("dialogue_history", sa.JSON(), nullable=True),
        sa.Column("ai_agent_pid", sa.Integer(), nullable=True),
        sa.Column("ai_agent_status", sa.String(), nullable=False),
        sa.Column("id", sa.Integer(), nullable=False),
        sa.Column("started_at", sa.DateTime(), nullable=False),
        sa.Column("completed_at", sa.DateTime(), nullable=True),
        sa.ForeignKeyConstraint(
            ["resume_id"],
            ["resume.id"],
        ),
        sa.PrimaryKeyConstraint("id"),
    )


def downgrade() -> None:
    """Downgrade schema."""
    # Drop the whole table
    op.drop_table("interview_sessions")
@@ -1,71 +1,156 @@
"""initial

Revision ID: a694f7c9e766
Revises:
Create Date: 2025-08-30 19:48:53.070679

"""

from collections.abc import Sequence

import sqlalchemy as sa
import sqlmodel
from alembic import op

# revision identifiers, used by Alembic.
revision: str = "a694f7c9e766"
down_revision: str | Sequence[str] | None = None
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None


def upgrade() -> None:
    """Upgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table(
        "vacancy",
        sa.Column(
            "title", sqlmodel.sql.sqltypes.AutoString(length=255), nullable=False
        ),
        sa.Column("description", sqlmodel.sql.sqltypes.AutoString(), nullable=False),
        sa.Column("key_skills", sqlmodel.sql.sqltypes.AutoString(), nullable=True),
        sa.Column(
            "employment_type",
            sa.Enum(
                "FULL_TIME",
                "PART_TIME",
                "PROJECT",
                "VOLUNTEER",
                "PROBATION",
                name="employmenttype",
            ),
            nullable=False,
        ),
        sa.Column(
            "experience",
            sa.Enum(
                "NO_EXPERIENCE",
                "BETWEEN_1_AND_3",
                "BETWEEN_3_AND_6",
                "MORE_THAN_6",
                name="experience",
            ),
            nullable=False,
        ),
        sa.Column(
            "schedule",
            sa.Enum(
                "FULL_DAY",
                "SHIFT",
                "FLEXIBLE",
                "REMOTE",
                "FLY_IN_FLY_OUT",
                name="schedule",
            ),
            nullable=False,
        ),
        sa.Column("salary_from", sa.Integer(), nullable=True),
        sa.Column("salary_to", sa.Integer(), nullable=True),
        sa.Column(
            "salary_currency", sqlmodel.sql.sqltypes.AutoString(length=3), nullable=True
        ),
        sa.Column("gross_salary", sa.Boolean(), nullable=True),
        sa.Column(
            "company_name", sqlmodel.sql.sqltypes.AutoString(length=255), nullable=False
        ),
        sa.Column(
            "company_description", sqlmodel.sql.sqltypes.AutoString(), nullable=True
        ),
        sa.Column(
            "area_name", sqlmodel.sql.sqltypes.AutoString(length=255), nullable=False
        ),
        sa.Column("metro_stations", sqlmodel.sql.sqltypes.AutoString(), nullable=True),
        sa.Column("address", sqlmodel.sql.sqltypes.AutoString(), nullable=True),
        sa.Column(
            "professional_roles", sqlmodel.sql.sqltypes.AutoString(), nullable=True
        ),
        sa.Column(
            "contacts_name", sqlmodel.sql.sqltypes.AutoString(length=255), nullable=True
        ),
        sa.Column(
            "contacts_email",
            sqlmodel.sql.sqltypes.AutoString(length=255),
            nullable=True,
        ),
        sa.Column(
            "contacts_phone", sqlmodel.sql.sqltypes.AutoString(length=50), nullable=True
        ),
        sa.Column("is_archived", sa.Boolean(), nullable=False),
        sa.Column("premium", sa.Boolean(), nullable=False),
        sa.Column("published_at", sa.DateTime(), nullable=True),
        sa.Column("url", sqlmodel.sql.sqltypes.AutoString(), nullable=True),
        sa.Column("id", sa.Integer(), nullable=False),
        sa.Column("created_at", sa.DateTime(), nullable=False),
        sa.Column("updated_at", sa.DateTime(), nullable=False),
        sa.PrimaryKeyConstraint("id"),
    )
    op.create_table(
        "resume",
        sa.Column("vacancy_id", sa.Integer(), nullable=False),
        sa.Column(
            "applicant_name",
            sqlmodel.sql.sqltypes.AutoString(length=255),
            nullable=False,
        ),
        sa.Column(
            "applicant_email",
            sqlmodel.sql.sqltypes.AutoString(length=255),
            nullable=False,
        ),
        sa.Column(
            "applicant_phone",
            sqlmodel.sql.sqltypes.AutoString(length=50),
            nullable=True,
        ),
        sa.Column(
            "resume_file_url", sqlmodel.sql.sqltypes.AutoString(), nullable=False
        ),
        sa.Column("cover_letter", sqlmodel.sql.sqltypes.AutoString(), nullable=True),
        sa.Column(
            "status",
            sa.Enum(
                "PENDING",
                "UNDER_REVIEW",
                "INTERVIEW_SCHEDULED",
                "INTERVIEWED",
                "REJECTED",
                "ACCEPTED",
                name="resumestatus",
            ),
            nullable=False,
        ),
        sa.Column(
            "interview_report_url", sqlmodel.sql.sqltypes.AutoString(), nullable=True
        ),
        sa.Column("notes", sqlmodel.sql.sqltypes.AutoString(), nullable=True),
        sa.Column("id", sa.Integer(), nullable=False),
        sa.Column("created_at", sa.DateTime(), nullable=False),
        sa.Column("updated_at", sa.DateTime(), nullable=False),
        sa.ForeignKeyConstraint(
            ["vacancy_id"],
            ["vacancy.id"],
        ),
        sa.PrimaryKeyConstraint("id"),
    )
    # ### end Alembic commands ###
@@ -73,6 +158,6 @@ def upgrade() -> None:
def downgrade() -> None:
    """Downgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_table("resume")
    op.drop_table("vacancy")
    # ### end Alembic commands ###
@@ -5,17 +5,17 @@ Revises: c9bcdd2ddeeb
Create Date: 2025-09-03 23:45:13.221735
"""

from collections.abc import Sequence

import sqlalchemy as sa
from alembic import op

# revision identifiers, used by Alembic.
revision: str = "a816820baadb"
down_revision: str | Sequence[str] | None = "c9bcdd2ddeeb"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None
def upgrade() -> None:
@@ -26,17 +26,20 @@ def upgrade() -> None:
        ALTER COLUMN parsed_data TYPE TEXT USING parsed_data::TEXT,
        ALTER COLUMN interview_plan TYPE TEXT USING interview_plan::TEXT
    """)

    op.execute("""
        ALTER TABLE interview_sessions
        ALTER COLUMN dialogue_history TYPE TEXT USING dialogue_history::TEXT
    """)

    # Also fix status column
    op.alter_column(
        "interview_sessions",
        "status",
        existing_type=sa.VARCHAR(length=50),
        nullable=False,
        existing_server_default=sa.text("'created'::character varying"),
    )
def downgrade() -> None:
@@ -47,13 +50,16 @@ def downgrade() -> None:
        ALTER COLUMN parsed_data TYPE JSON USING parsed_data::JSON,
        ALTER COLUMN interview_plan TYPE JSON USING interview_plan::JSON
    """)

    op.execute("""
        ALTER TABLE interview_sessions
        ALTER COLUMN dialogue_history TYPE JSON USING dialogue_history::JSON
    """)

    op.alter_column(
        "interview_sessions",
        "status",
        existing_type=sa.VARCHAR(length=50),
        nullable=True,
        existing_server_default=sa.text("'created'::character varying"),
    )
@@ -5,42 +5,51 @@ Revises: 7ffa784ab042
Create Date: 2025-08-30 20:10:57.802953
"""

from collections.abc import Sequence

import sqlalchemy as sa
import sqlmodel
from alembic import op

# revision identifiers, used by Alembic.
revision: str = "ae966b3e742e"
down_revision: str | Sequence[str] | None = "7ffa784ab042"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None
def upgrade() -> None:
    """Upgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table(
        "session",
        sa.Column(
            "session_id", sqlmodel.sql.sqltypes.AutoString(length=255), nullable=False
        ),
        sa.Column(
            "user_agent", sqlmodel.sql.sqltypes.AutoString(length=512), nullable=True
        ),
        sa.Column(
            "ip_address", sqlmodel.sql.sqltypes.AutoString(length=45), nullable=True
        ),
        sa.Column("is_active", sa.Boolean(), nullable=False),
        sa.Column("expires_at", sa.DateTime(), nullable=False),
        sa.Column("last_activity", sa.DateTime(), nullable=False),
        sa.Column("id", sa.Integer(), nullable=False),
        sa.Column("created_at", sa.DateTime(), nullable=False),
        sa.Column("updated_at", sa.DateTime(), nullable=False),
        sa.PrimaryKeyConstraint("id"),
    )
    op.create_index(
        op.f("ix_session_session_id"), "session", ["session_id"], unique=True
    )
    # ### end Alembic commands ###
def downgrade() -> None:
    """Downgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_index(op.f("ix_session_session_id"), table_name="session")
    op.drop_table("session")
    # ### end Alembic commands ###
@@ -5,44 +5,74 @@ Revises: de11b016b35a
Create Date: 2025-09-03 17:55:41.653125
"""

from collections.abc import Sequence

import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects import postgresql

# revision identifiers, used by Alembic.
revision: str = "c2d48b31ee30"
down_revision: str | Sequence[str] | None = "de11b016b35a"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None
def upgrade() -> None:
    """Upgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_index(op.f("ix_interview_sessions_id"), table_name="interview_sessions")
    op.drop_table("interview_sessions")
    # ### end Alembic commands ###


def downgrade() -> None:
    """Downgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table(
        "interview_sessions",
        sa.Column("id", sa.INTEGER(), autoincrement=True, nullable=False),
        sa.Column("resume_id", sa.INTEGER(), autoincrement=False, nullable=False),
        sa.Column(
            "room_name", sa.VARCHAR(length=255), autoincrement=False, nullable=False
        ),
        sa.Column(
            "status",
            postgresql.ENUM(
                "created", "active", "completed", "failed", name="interviewstatus"
            ),
            autoincrement=False,
            nullable=False,
        ),
        sa.Column("transcript", sa.TEXT(), autoincrement=False, nullable=True),
        sa.Column("ai_feedback", sa.TEXT(), autoincrement=False, nullable=True),
        sa.Column(
            "started_at", postgresql.TIMESTAMP(), autoincrement=False, nullable=False
        ),
        sa.Column(
            "completed_at", postgresql.TIMESTAMP(), autoincrement=False, nullable=True
        ),
        sa.Column("ai_agent_pid", sa.INTEGER(), autoincrement=False, nullable=True),
        sa.Column(
            "ai_agent_status",
            sa.VARCHAR(),
            server_default=sa.text("'not_started'::character varying"),
            autoincrement=False,
            nullable=False,
        ),
        sa.ForeignKeyConstraint(
            ["resume_id"], ["resume.id"], name=op.f("interview_sessions_resume_id_fkey")
        ),
        sa.PrimaryKeyConstraint("id", name=op.f("interview_sessions_pkey")),
        sa.UniqueConstraint(
            "room_name",
            name=op.f("interview_sessions_room_name_key"),
            postgresql_include=[],
            postgresql_nulls_not_distinct=False,
        ),
    )
    op.create_index(
        op.f("ix_interview_sessions_id"), "interview_sessions", ["id"], unique=False
    )
    # ### end Alembic commands ###
@@ -5,42 +5,56 @@ Revises: 9d415bf0ff2e
Create Date: 2025-09-03 18:07:59.433986
"""

from collections.abc import Sequence

import sqlalchemy as sa
from alembic import op

# revision identifiers, used by Alembic.
revision: str = "c9bcdd2ddeeb"
down_revision: str | Sequence[str] | None = "9d415bf0ff2e"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None
def upgrade() -> None:
    """Upgrade schema."""
    # Recreate the interview_sessions table from scratch
    op.execute("DROP TABLE IF EXISTS interview_sessions CASCADE")
    op.create_table(
        "interview_sessions",
        sa.Column("id", sa.Integer(), nullable=False),
        sa.Column("resume_id", sa.Integer(), nullable=False),
        sa.Column("room_name", sa.String(length=255), nullable=False),
        sa.Column("status", sa.String(50), nullable=True, server_default="created"),
        sa.Column("transcript", sa.Text(), nullable=True),
        sa.Column("ai_feedback", sa.Text(), nullable=True),
        sa.Column("dialogue_history", sa.JSON(), nullable=True),
        sa.Column("ai_agent_pid", sa.Integer(), nullable=True),
        sa.Column(
            "ai_agent_status",
            sa.String(50),
            nullable=False,
            server_default="not_started",
        ),
        sa.Column(
            "started_at",
            sa.DateTime(),
            nullable=False,
            server_default=sa.text("CURRENT_TIMESTAMP"),
        ),
        sa.Column("completed_at", sa.DateTime(), nullable=True),
        sa.ForeignKeyConstraint(
            ["resume_id"],
            ["resume.id"],
        ),
        sa.PrimaryKeyConstraint("id"),
        sa.UniqueConstraint("room_name"),
    )
def downgrade() -> None:
    """Downgrade schema."""
    op.drop_table("interview_sessions")
@@ -5,18 +5,18 @@ Revises: 4e19b8fe4a88
Create Date: 2025-09-02 14:45:30.749202
"""

from collections.abc import Sequence

import sqlalchemy as sa
import sqlmodel
from alembic import op

# revision identifiers, used by Alembic.
revision: str = "dba37152ae9a"
down_revision: str | Sequence[str] | None = "4e19b8fe4a88"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None
def upgrade() -> None:
@@ -25,19 +25,22 @@ def upgrade() -> None:
    op.execute("ALTER TYPE resumestatus ADD VALUE IF NOT EXISTS 'PARSING'")
    op.execute("ALTER TYPE resumestatus ADD VALUE IF NOT EXISTS 'PARSED'")
    op.execute("ALTER TYPE resumestatus ADD VALUE IF NOT EXISTS 'PARSE_FAILED'")

    # ### commands auto generated by Alembic - please adjust! ###
    op.add_column("resume", sa.Column("parsed_data", sa.JSON(), nullable=True))
    op.add_column(
        "resume",
        sa.Column("parse_error", sqlmodel.sql.sqltypes.AutoString(), nullable=True),
    )
    # ### end Alembic commands ###
def downgrade() -> None:
    """Downgrade schema."""
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_column("resume", "parse_error")
    op.drop_column("resume", "parsed_data")
    # ### end Alembic commands ###
    # Note: Cannot remove ENUM values in PostgreSQL, they are permanent once added
    # If needed, would require recreating the ENUM type
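The note above is worth spelling out: PostgreSQL has no `ALTER TYPE ... DROP VALUE`, so shrinking an ENUM means rebuilding the type. A hedged sketch of the usual rename-and-swap pattern follows — the `resume`, `status`, and `resumestatus` names come from this migration, while `shrink_enum_sql` and the `resumestatus_old` temporary name are hypothetical:

```python
# Sketch only: PostgreSQL cannot drop a value from an ENUM, so the common
# workaround is rename -> recreate -> re-cast the column -> drop the old type.
def shrink_enum_sql(keep_values: list[str]) -> list[str]:
    """Return the SQL statements that rebuild resumestatus with only keep_values."""
    quoted = ", ".join(f"'{v}'" for v in keep_values)
    return [
        # park the existing type under a temporary name
        "ALTER TYPE resumestatus RENAME TO resumestatus_old",
        # recreate the type with only the values we want to keep
        f"CREATE TYPE resumestatus AS ENUM ({quoted})",
        # re-point the column at the new type via a text round-trip
        "ALTER TABLE resume ALTER COLUMN status "
        "TYPE resumestatus USING status::text::resumestatus",
        # drop the parked type
        "DROP TYPE resumestatus_old",
    ]


statements = shrink_enum_sql(
    ["PENDING", "UNDER_REVIEW", "INTERVIEW_SCHEDULED", "INTERVIEWED", "REJECTED", "ACCEPTED"]
)
```

In an actual downgrade each statement would be passed to `op.execute` in order; any rows still holding a removed value must be updated first, or the `USING` cast fails.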
@@ -5,28 +5,35 @@ Revises: 1a2cda4df181
Create Date: 2025-09-03 00:02:24.263636
"""

from collections.abc import Sequence

import sqlalchemy as sa
from alembic import op

# revision identifiers, used by Alembic.
revision: str = "de11b016b35a"
down_revision: str | Sequence[str] | None = "1a2cda4df181"
branch_labels: str | Sequence[str] | None = None
depends_on: str | Sequence[str] | None = None
def upgrade() -> None:
    """Upgrade schema."""
    # Add AI agent process tracking columns
    op.add_column(
        "interview_sessions", sa.Column("ai_agent_pid", sa.Integer(), nullable=True)
    )
    op.add_column(
        "interview_sessions",
        sa.Column(
            "ai_agent_status", sa.String(), server_default="not_started", nullable=False
        ),
    )
def downgrade() -> None:
    """Downgrade schema."""
    # Drop AI agent process tracking columns
    op.drop_column("interview_sessions", "ai_agent_status")
    op.drop_column("interview_sessions", "ai_agent_pid")
@@ -41,19 +41,35 @@ dev-dependencies = [
    "pytest>=7.4.0",
    "pytest-asyncio>=0.21.0",
    "httpx>=0.25.0",
    "mypy>=1.7.0",
    "ruff>=0.12.12",
]

[tool.ruff]
line-length = 88
target-version = "py311"

[tool.ruff.lint]
# Enable equivalent of flake8 rules
select = [
    "E",   # pycodestyle errors
    "W",   # pycodestyle warnings
    "F",   # Pyflakes
    "I",   # isort
    "B",   # flake8-bugbear
    "C4",  # flake8-comprehensions
    "UP",  # pyupgrade
]
ignore = [
    "E501",  # line too long, handled by formatter
]

[tool.ruff.format]
# Equivalent to black configuration
quote-style = "double"
indent-style = "space"
skip-magic-trailing-comma = false
line-ending = "auto"

[tool.mypy]
python_version = "3.11"
@@ -1,13 +1,14 @@
#!/usr/bin/env python3
"""Quick API testing script"""

import time
from pathlib import Path

import requests

BASE_URL = "http://localhost:8000"
def test_health():
    """Test API health"""
    try:
@@ -18,6 +19,7 @@ def test_health():
        print(f"API not available: {str(e)}")
        return False


def upload_test_resume():
    """Upload test resume"""
    try:
@@ -26,62 +28,63 @@ def upload_test_resume():
        if not resume_path.exists():
            print("test_resume.txt not found!")
            return None

        # Upload file
        with open(resume_path, encoding="utf-8") as f:
            files = {"file": (resume_path.name, f, "text/plain")}
            data = {
                "applicant_name": "Иванов Иван Иванович",
                "applicant_email": "ivan.ivanov@example.com",
                "applicant_phone": "+7 (999) 123-45-67",
                "vacancy_id": "1",
            }

            response = requests.post(
                f"{BASE_URL}/resume/upload", files=files, data=data, timeout=30
            )

        print(f"Resume upload: {response.status_code}")
        if response.status_code == 200:
            result = response.json()
            print(f"Resume ID: {result.get('resume_id')}")
            return result.get("resume_id")
        else:
            print(f"Upload failed: {response.text}")
            return None

    except Exception as e:
        print(f"Upload error: {str(e)}")
        return None
def check_resume_processing(resume_id):
    """Check resume processing status"""
    try:
        response = requests.get(f"{BASE_URL}/resume/{resume_id}")
        print(f"Resume status check: {response.status_code}")

        if response.status_code == 200:
            resume = response.json()
            print(f"Status: {resume.get('status')}")
            print(
                f"Has interview plan: {'interview_plan' in resume and resume['interview_plan'] is not None}"
            )
            return resume
        else:
            print(f"Resume check failed: {response.text}")
            return None

    except Exception as e:
        print(f"Status check error: {str(e)}")
        return None
def create_interview_session(resume_id):
    """Create interview session"""
    try:
        response = requests.post(f"{BASE_URL}/interview/{resume_id}/token")
        print(f"Interview session creation: {response.status_code}")

        if response.status_code == 200:
            result = response.json()
            print(f"Room: {result.get('room_name')}")
@@ -90,97 +93,102 @@ def create_interview_session(resume_id):
        else:
            print(f"Interview creation failed: {response.text}")
            return None

    except Exception as e:
        print(f"Interview creation error: {str(e)}")
        return None


def check_admin_processes():
    """Check admin process monitoring"""
    try:
        response = requests.get(f"{BASE_URL}/admin/interview-processes")
        print(f"Admin processes check: {response.status_code}")

        if response.status_code == 200:
            result = response.json()
            print(f"Active sessions: {result.get('total_active_sessions')}")
            for proc in result.get("processes", []):
                print(
                    f"  Session {proc['session_id']}: PID {proc['pid']}, Running: {proc['is_running']}"
                )
            return result
        else:
            print(f"Admin check failed: {response.text}")
            return None

    except Exception as e:
        print(f"Admin check error: {str(e)}")
        return None
def main():
    """Run quick API tests"""
    print("=" * 50)
    print("QUICK API TEST")
    print("=" * 50)

    # 1. Check if API is running
    if not test_health():
        print("❌ API not running! Start with: uvicorn app.main:app --reload")
        return
    print("✅ API is running")

    # 2. Upload test resume
    print("\n--- Testing Resume Upload ---")
    resume_id = upload_test_resume()
    if not resume_id:
        print("❌ Resume upload failed!")
        return
    print(f"✅ Resume uploaded with ID: {resume_id}")

    # 3. Wait for processing and check status
    print("\n--- Checking Resume Processing ---")
    print("Waiting 10 seconds for Celery processing...")
    time.sleep(10)

    resume_data = check_resume_processing(resume_id)
    if not resume_data:
        print("❌ Could not check resume status!")
        return

    if resume_data.get("status") == "parsed":
        print("✅ Resume processed successfully")
    else:
        print(f"⚠️ Resume status: {resume_data.get('status')}")

    # 4. Create interview session
    print("\n--- Testing Interview Session ---")
    interview_data = create_interview_session(resume_id)
    if interview_data:
        print("✅ Interview session created")
    else:
        print("❌ Interview session creation failed")

    # 5. Check admin monitoring
    print("\n--- Testing Admin Monitoring ---")
    admin_data = check_admin_processes()
    if admin_data:
        print("✅ Admin monitoring works")
    else:
        print("❌ Admin monitoring failed")

    print("\n" + "=" * 50)
    print("QUICK TEST COMPLETED")
    print("=" * 50)
    print("\nNext steps:")
    print("1. Check Celery worker logs for task processing")
    print("2. Inspect database for interview_plan data")
    print("3. For voice testing, start LiveKit server")
    print("4. Monitor system with: curl http://localhost:8000/admin/system-stats")


if __name__ == "__main__":
    main()
@@ -1,5 +1,4 @@
from .llm import ChatModel, EmbeddingsModel
from .service import RagService

__all__ = ["RagService", "ChatModel", "EmbeddingsModel"]
@@ -1,3 +1,3 @@
from .model import VectorStoreModel as VectorStore

__all__ = ["VectorStore"]
@@ -1,3 +1,3 @@
from .model import ChatModel, EmbeddingsModel

__all__ = ["ChatModel", "EmbeddingsModel"]
@ -1,10 +1,11 @@
import json import json
import pdfplumber
import os import os
from typing import Dict, Any from typing import Any
import pdfplumber
from langchain.schema import HumanMessage, SystemMessage
from langchain_core.embeddings import Embeddings from langchain_core.embeddings import Embeddings
from langchain_core.language_models import BaseChatModel from langchain_core.language_models import BaseChatModel
from langchain.schema import HumanMessage, SystemMessage
try:
    from docx import Document
@@ -16,6 +17,7 @@ try:
except ImportError:
    docx2txt = None


class EmbeddingsModel:
    def __init__(self, model: Embeddings):
        self.model = model
@@ -23,6 +25,7 @@ class EmbeddingsModel:
    def get_model(self):
        return self.model


class ChatModel:
    def __init__(self, model: BaseChatModel):
        self.model = model
@@ -30,13 +33,14 @@ class ChatModel:
    def get_llm(self):
        return self.model


class ResumeParser:
    def __init__(self, chat_model: ChatModel):
        self.llm = chat_model.get_llm()

        self.resume_prompt = """
        Проанализируй текст резюме и извлеки из него структурированные данные в JSON формате.
        Верни только JSON без дополнительных комментариев.

        Формат ответа:
        {{
            "name": "Имя кандидата",
@@ -55,7 +59,7 @@ class ResumeParser:
            "education": "Образование",
            "summary": "Краткое резюме о кандидате"
        }}

        Текст резюме:
        {resume_text}
        """
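One detail in the template above is easy to miss: the doubled braces. Assuming the parser fills `{resume_text}` with `str.format` (the single-brace placeholder suggests so), `{{` and `}}` are what keep the JSON skeleton intact through formatting. A tiny self-contained illustration, with made-up resume text:

```python
# Minimal illustration of brace escaping in str.format templates.
# The template mirrors the prompt above; the resume text is sample data.
template = '{{"name": "Имя кандидата"}}\nТекст резюме:\n{resume_text}'

prompt = template.format(resume_text="Python developer, 5 years of experience")
# {{ and }} collapse to literal braces; {resume_text} is substituted
```

Without the doubling, `format` would raise `KeyError` on the `"name"` field instead of producing the literal JSON skeleton.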
@@ -64,16 +68,16 @@ class ResumeParser:
        """Извлекает текст из PDF файла"""
        try:
            with pdfplumber.open(file_path) as pdf:
                text = "\n".join([page.extract_text() or "" for page in pdf.pages])
                return text.strip()
        except Exception as e:
            raise Exception(f"Ошибка при чтении PDF: {str(e)}") from e
    def extract_text_from_docx(self, file_path: str) -> str:
        """Извлекает текст из DOCX файла"""
        try:
            print(f"[DEBUG] Extracting DOCX text from: {file_path}")

            if docx2txt:
                # Prefer docx2txt for simple text extraction
                print("[DEBUG] Using docx2txt")
@@ -83,19 +87,21 @@ class ResumeParser:
                    return text.strip()
                else:
                    print("[DEBUG] docx2txt returned empty text")

            if Document:
                # Use python-docx as a fallback
                print("[DEBUG] Using python-docx as fallback")
                doc = Document(file_path)
                text = "\n".join([paragraph.text for paragraph in doc.paragraphs])
                print(f"[DEBUG] Extracted {len(text)} characters using python-docx")
                return text.strip()

            raise Exception(
                "Библиотеки для чтения DOCX не установлены (docx2txt или python-docx)"
            )
        except Exception as e:
            print(f"[DEBUG] DOCX extraction failed: {str(e)}")
            raise Exception(f"Ошибка при чтении DOCX: {str(e)}") from e
    def extract_text_from_doc(self, file_path: str) -> str:
        """Extracts text from a DOC file"""
@@ -104,103 +110,114 @@ class ResumeParser:
            if Document:
                try:
                    doc = Document(file_path)
                    text = "\n".join([paragraph.text for paragraph in doc.paragraphs])
                    return text.strip()
                except Exception:
                    # If python-docx cannot read the .doc, try system utilities
                    pass

            # Try the system antiword command (Linux/Mac)
            import subprocess

            try:
                result = subprocess.run(
                    ["antiword", file_path], capture_output=True, text=True
                )
                if result.returncode == 0:
                    return result.stdout.strip()
            except FileNotFoundError:
                pass

            raise Exception(
                "Не удалось найти подходящий инструмент для чтения DOC файлов. Рекомендуется использовать DOCX формат."
            )
        except Exception as e:
            raise Exception(f"Ошибка при чтении DOC: {str(e)}") from e
    def extract_text_from_txt(self, file_path: str) -> str:
        """Extracts text from a TXT file"""
        try:
            # Try several encodings in turn
            encodings = ["utf-8", "cp1251", "latin-1", "cp1252"]

            for encoding in encodings:
                try:
                    with open(file_path, encoding=encoding) as file:
                        text = file.read()
                        return text.strip()
                except UnicodeDecodeError:
                    continue

            raise Exception("Не удалось определить кодировку текстового файла")
        except Exception as e:
            raise Exception(f"Ошибка при чтении TXT: {str(e)}") from e
    def extract_text_from_file(self, file_path: str) -> str:
        """Universal method for extracting text from a file"""
        if not os.path.exists(file_path):
            raise Exception(f"Файл не найден: {file_path}")

        # Determine the file extension
        _, ext = os.path.splitext(file_path.lower())

        # Debug information
        print(f"[DEBUG] Parsing file: {file_path}, detected extension: {ext}")

        if ext == ".pdf":
            return self.extract_text_from_pdf(file_path)
        elif ext == ".docx":
            return self.extract_text_from_docx(file_path)
        elif ext == ".doc":
            return self.extract_text_from_doc(file_path)
        elif ext == ".txt":
            return self.extract_text_from_txt(file_path)
        else:
            raise Exception(
                f"Неподдерживаемый формат файла: {ext}. Поддерживаемые форматы: PDF, DOCX, DOC, TXT"
            )
    def parse_resume_text(self, resume_text: str) -> dict[str, Any]:
        """Parses resume text via the LLM"""
        try:
            messages = [
                SystemMessage(
                    content="Ты эксперт по анализу резюме. Извлекай данные точно в указанном JSON формате."
                ),
                HumanMessage(
                    content=self.resume_prompt.format(resume_text=resume_text)
                ),
            ]

            response = self.llm.invoke(messages)

            # Extract JSON from the response
            response_text = response.content.strip()

            # Try to find JSON in the response
            if response_text.startswith("{") and response_text.endswith("}"):
                return json.loads(response_text)
            else:
                # Look for JSON inside the text
                start = response_text.find("{")
                end = response_text.rfind("}") + 1
                if start != -1 and end > start:
                    json_str = response_text[start:end]
                    return json.loads(json_str)
                else:
                    raise ValueError("JSON не найден в ответе LLM")

        except json.JSONDecodeError as e:
            raise Exception(f"Ошибка парсинга JSON из ответа LLM: {str(e)}") from e
        except Exception as e:
            raise Exception(f"Ошибка при обращении к LLM: {str(e)}") from e
    def parse_resume_from_file(self, file_path: str) -> dict[str, Any]:
        """Full resume-parsing pipeline from a file"""
        # Step 1: extract text from the file (PDF, DOCX, DOC, TXT supported)
        resume_text = self.extract_text_from_file(file_path)

        if not resume_text:
            raise Exception("Не удалось извлечь текст из файла")

        # Step 2: parse via the LLM
        return self.parse_resume_text(resume_text)
View File
@@ -1,65 +1,71 @@
import json

import redis
from langchain.memory import ConversationSummaryBufferMemory
from langchain.schema import AIMessage, HumanMessage
from sqlalchemy.ext.asyncio import AsyncSession

from rag.settings import settings


class ChatMemoryManager:
    def __init__(self, llm, token_limit=3000):
        self.redis = redis.Redis(
            host=settings.redis_cache_url,
            port=settings.redis_cache_port,
            db=settings.redis_cache_db,
        )
        self.llm = llm
        self.token_limit = token_limit

    def _convert_to_langchain(self, messages: list[dict]):
        return [
            AIMessage(content=msg["content"])
            if msg["is_ai"]
            else HumanMessage(content=msg["content"])
            for msg in messages
        ]

    def _annotate_messages(self, messages: list):
        # Convert to format compatible with langchain
        # Assuming messages have some way to identify if they're from AI
        return [
            {
                **msg,
                "is_ai": msg.get("user_type") == "AI"
                or msg.get("username") == "SOMMELIER",
            }
            for msg in messages
        ]

    def _serialize_messages(self, messages: list[dict]):
        return [
            {**msg, "created_at": msg["created_at"].isoformat()} for msg in messages
        ]

    def _cache_key(self, session_id: int) -> str:
        return f"chat_memory:{session_id}"

    async def load_chat_history(
        self, session_id: int, session: AsyncSession
    ) -> list[HumanMessage | AIMessage]:
        cache_key = self._cache_key(session_id)
        serialized = self.redis.get(cache_key)

        if serialized:
            cached_messages = json.loads(serialized)
            if cached_messages:
                # last_time = datetime.fromisoformat(cached_messages[-1]["created_at"])
                # TODO: Replace with actual Message model query when available
                # This would need to be implemented with SQLModel/SQLAlchemy
                new_messages = []  # Placeholder for actual DB query

                if new_messages:
                    annotated_messages = self._annotate_messages(new_messages)
                    all_messages = cached_messages + self._serialize_messages(
                        annotated_messages
                    )
                    self.redis.setex(cache_key, 3600, json.dumps(all_messages))
                    return self._convert_to_langchain(all_messages)
@@ -68,18 +74,23 @@ class ChatMemoryManager:
        # TODO: Replace with actual Message model query when available
        # This would need to be implemented with SQLModel/SQLAlchemy
        db_messages = []  # Placeholder for actual DB query

        if db_messages:
            annotated_messages = self._annotate_messages(db_messages)
            self.redis.setex(
                cache_key,
                3600,
                json.dumps(self._serialize_messages(annotated_messages)),
            )
            return self._convert_to_langchain(annotated_messages)

        return []

    async def get_session_memory(
        self, session_id: int, session: AsyncSession
    ) -> ConversationSummaryBufferMemory:
        memory = ConversationSummaryBufferMemory(
            llm=self.llm, max_token_limit=self.token_limit
        )
        messages = await self.load_chat_history(session_id, session)
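The caching pattern here, JSON-serializing the message list and `setex`-ing it with a one-hour TTL, can be illustrated with an in-memory stand-in for `redis.Redis` (`FakeRedis` below is a sketch, not the real client):

```python
import json
from datetime import datetime, timedelta


class FakeRedis:
    """Tiny in-memory stand-in mimicking the two redis.Redis calls used above."""

    def __init__(self):
        self._store = {}

    def setex(self, key, ttl_seconds, value):
        # Store the value together with its expiry timestamp
        self._store[key] = (value, datetime.now() + timedelta(seconds=ttl_seconds))

    def get(self, key):
        value, expires_at = self._store.get(key, (None, None))
        if value is None or datetime.now() >= expires_at:
            return None
        return value


r = FakeRedis()
cache_key = "chat_memory:42"
messages = [{"content": "hi", "is_ai": False}, {"content": "hello!", "is_ai": True}]
r.setex(cache_key, 3600, json.dumps(messages))  # cache with a 1-hour TTL
restored = json.loads(r.get(cache_key))
```

The real client behaves the same way from the caller's perspective: a missing or expired key returns `None`, so `load_chat_history` falls through to the database path.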
View File
@@ -1,23 +1,24 @@
from langchain_milvus import Milvus
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

from rag.database.model import VectorStoreModel
from rag.llm.model import ChatModel, EmbeddingsModel
from rag.service.model import RagService
from rag.settings import settings
from rag.vector_store import MilvusVectorStore


class ModelRegistry:
    """Registry for initializing and retrieving models"""

    _instance = None
    _initialized = False

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        if not self._initialized:
            self._chat_model = None
@@ -25,57 +26,56 @@ class ModelRegistry:
            self._vector_store = None
            self._rag_service = None
            self._initialized = True

    def get_chat_model(self) -> ChatModel:
        """Get or create the chat model"""
        if self._chat_model is None:
            if settings.openai_api_key:
                llm = ChatOpenAI(
                    api_key=settings.openai_api_key, model="gpt-4o-mini", temperature=0
                )
                self._chat_model = ChatModel(llm)
            else:
                raise ValueError("OpenAI API key не настроен в settings")
        return self._chat_model

    def get_embeddings_model(self) -> EmbeddingsModel:
        """Get or create the embeddings model"""
        if self._embeddings_model is None:
            if settings.openai_api_key:
                embeddings = OpenAIEmbeddings(
                    api_key=settings.openai_api_key,
                    model=settings.openai_embeddings_model,
                )
                self._embeddings_model = EmbeddingsModel(embeddings)
            else:
                raise ValueError("OpenAI API key не настроен в settings")
        return self._embeddings_model

    def get_vector_store(self) -> MilvusVectorStore:
        """Get or create the vector store"""
        if self._vector_store is None:
            embeddings_model = self.get_embeddings_model()
            self._vector_store = MilvusVectorStore(
                embeddings_model.get_model(), collection_name=settings.milvus_collection
            )
        return self._vector_store

    def get_rag_service(self) -> RagService:
        """Get or create the RAG service"""
        if self._rag_service is None:
            # Create a VectorStoreModel for compatibility with existing code
            # Parse the URI to obtain host and port
            uri_without_protocol = settings.milvus_uri.replace("http://", "").replace(
                "https://", ""
            )
            if ":" in uri_without_protocol:
                host, port = uri_without_protocol.split(":", 1)
                port = int(port)
            else:
                host = uri_without_protocol
                port = 19530  # Default Milvus port

            try:
                # Try using the URI directly
                milvus_store = Milvus(
@@ -85,7 +85,7 @@ class ModelRegistry:
                    },
                    collection_name=settings.milvus_collection,
                )
            except Exception:
                # If that failed, fall back to host/port
                milvus_store = Milvus(
                    embedding_function=self.get_embeddings_model().get_model(),
@@ -95,15 +95,14 @@ class ModelRegistry:
                    },
                    collection_name=settings.milvus_collection,
                )

            vector_store_model = VectorStoreModel(milvus_store)
            self._rag_service = RagService(
                vector_store=vector_store_model, llm=self.get_chat_model()
            )
        return self._rag_service


# Singleton instance
registry = ModelRegistry()
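`ModelRegistry` uses the classic `__new__`-based singleton with an `_initialized` guard so that `__init__`, which runs on every `Registry()` call, does not wipe already-created resources. Stripped to its skeleton (the class name `Registry` and the `cache` attribute are illustrative):

```python
class Registry:
    _instance = None
    _initialized = False

    def __new__(cls):
        # Always hand back the one shared instance
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        # __init__ runs on every construction; the guard makes it run once
        if not self._initialized:
            self.cache = {}  # expensive resources would be lazily created here
            self._initialized = True  # instance attribute shadows the class default


a = Registry()
a.cache["chat_model"] = "gpt-4o-mini"
b = Registry()  # same object; the guard skips re-initialization
```

Without the guard, the second `Registry()` call would reset `cache` to an empty dict, discarding the lazily built models.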
View File
@@ -1,3 +1,3 @@
from .model import RagService

__all__ = ["RagService"]
View File
@@ -1,21 +1,21 @@
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.schema import HumanMessage, SystemMessage
from langchain_core.runnables import RunnableWithMessageHistory

from rag.database.model import VectorStoreModel
from rag.llm.model import ChatModel
from rag.memory import ChatMemoryManager

rag_template: str = """
You are a beverage and alcohol expert like a sommelier, but for all kinds of alcoholic drinks, including beer, wine, spirits, cocktails, etc
Answer clearly and stay within your expertise in alcohol and related topics

Rules:
1. Speak in first person: "I recommend", "I think"
2. Be conversational and personable - like a knowledgeable friend at a bar
3. Use facts from the context for specific characteristics, but speak generally when needed
4. Do not disclose sources or metadata from contextual documents
5. Answer questions about alcohol and related topics (food pairings, culture, serving, etc) but politely decline unrelated subjects
6. Be brief and useful - keep answers to 2-4 sentences
7. Use chat history to maintain a natural conversation flow
@@ -24,22 +24,29 @@ Rules:

Context: {context}
"""

get_summary_template = """Create a concise 3-5 word title for the following conversation.
Focus on the main topic. Reply only with the title.\n\n
Chat history:\n"""

rephrase_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "Given a chat history and the latest user question which might reference context in the chat history, "
            "formulate a standalone question. Do NOT answer the question.",
        ),
        MessagesPlaceholder("chat_history"),
        ("human", "{input}"),
    ]
)

qa_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", rag_template),
        MessagesPlaceholder("chat_history"),
        ("human", "{input}"),
    ]
)


class RagService:
@@ -49,19 +56,25 @@ class RagService:
        retriever = self.vector_store.as_retriever()

        self.rephrase_prompt = ChatPromptTemplate.from_messages(
            [
                (
                    "system",
                    "Given a chat history and the latest user question which might reference context in the chat history, "
                    "formulate a standalone question. Do NOT answer the question.",
                ),
                MessagesPlaceholder("chat_history"),
                ("human", "{input}"),
            ]
        )

        self.qa_prompt = ChatPromptTemplate.from_messages(
            [
                ("system", rag_template),
                MessagesPlaceholder("chat_history"),
                ("human", "{input}"),
            ]
        )

        self.history_aware_retriever = create_history_aware_retriever(
            self.llm, retriever, self.rephrase_prompt
@@ -87,34 +100,36 @@ class RagService:
            get_session_history,
            input_messages_key="input",
            history_messages_key="chat_history",
            output_messages_key="answer",
        )

        for chunk in conversational_rag_chain.stream(
            {"input": query}, config={"configurable": {"session_id": str(session_id)}}
        ):
            answer = chunk.get("answer", "")
            if answer:
                yield answer

    def generate_title_with_llm(self, chat_history: str | list[str]) -> str:
        # Variant 1: chat_history is a string
        if isinstance(chat_history, str):
            prompt = get_summary_template + chat_history
            messages = [
                SystemMessage(
                    content="You are a helpful assistant that generates chat titles."
                ),
                HumanMessage(content=prompt),
            ]
        # Variant 2: chat_history is a list of messages (e.g. ["user: ...", "bot: ..."])
        else:
            prompt = get_summary_template + "\n".join(chat_history)
            messages = [
                SystemMessage(
                    content="You are a helpful assistant that generates chat titles."
                ),
                HumanMessage(content=prompt),
            ]
        response = self.llm.invoke(messages)
View File
@@ -1,12 +1,10 @@
from pydantic_settings import BaseSettings


class RagSettings(BaseSettings):
    # Database
    database_url: str = "postgresql+asyncpg://tdjx:1309@localhost:5432/hr_ai"

    # Milvus Settings
    milvus_uri: str = "http://5.188.159.90:19530"
    milvus_collection: str = "candidate_profiles"
@@ -15,31 +13,31 @@ class RagSettings(BaseSettings):
    redis_cache_url: str = "localhost"
    redis_cache_port: int = 6379
    redis_cache_db: int = 0

    # S3 Configuration
    s3_endpoint_url: str
    s3_access_key_id: str
    s3_secret_access_key: str
    s3_bucket_name: str
    s3_region: str = "ru-1"

    # LLM Settings
    openai_api_key: str | None = None
    anthropic_api_key: str | None = None
    openai_model: str = "gpt-4o-mini"
    openai_embeddings_model: str = "text-embedding-3-small"

    # AI Agent Settings
    deepgram_api_key: str | None = None
    cartesia_api_key: str | None = None
    elevenlabs_api_key: str | None = None
    resemble_api_key: str | None = None

    # LiveKit Configuration
    livekit_url: str = "ws://localhost:7880"
    livekit_api_key: str = "devkey"
    livekit_api_secret: str = "devkey_secret_32chars_minimum_length"

    # App Configuration
    app_env: str = "development"
    debug: bool = True
@@ -49,4 +47,4 @@ class RagSettings(BaseSettings):
        env_file_encoding = "utf-8"


settings = RagSettings()
View File
@@ -1,11 +1,15 @@
from typing import Any

from langchain_core.embeddings import Embeddings
from langchain_milvus import Milvus

from rag.settings import settings


class MilvusVectorStore:
    def __init__(
        self, embeddings_model: Embeddings, collection_name: str = "candidate_profiles"
    ):
        self.embeddings = embeddings_model
        self.collection_name = collection_name
@@ -18,18 +22,22 @@ class MilvusVectorStore:
            collection_name=collection_name,
        )

    def add_candidate_profile(self, candidate_id: str, resume_data: dict[str, Any]):
        """Adds a candidate profile to the vector store"""
        try:
            # Build the text to vectorize from skills and experience
            skills_text = " ".join(resume_data.get("skills", []))
            experience_text = " ".join(
                [
                    f"{exp.get('position', '')} {exp.get('company', '')} {exp.get('description', '')}"
                    for exp in resume_data.get("experience", [])
                ]
            )

            combined_text = (
                f"{skills_text} {experience_text} {resume_data.get('summary', '')}"
            )

            # Metadata for search
            metadata = {
                "candidate_id": candidate_id,
@@ -38,60 +46,53 @@ class MilvusVectorStore:
                "phone": resume_data.get("phone", ""),
                "total_years": resume_data.get("total_years", 0),
                "skills": resume_data.get("skills", []),
                "education": resume_data.get("education", ""),
            }

            # Add to the vector store
            self.vector_store.add_texts(
                texts=[combined_text], metadatas=[metadata], ids=[candidate_id]
            )

            return True
        except Exception as e:
            raise Exception(f"Ошибка при добавлении кандидата в Milvus: {str(e)}") from e

    def search_similar_candidates(self, query: str, k: int = 5) -> list[dict[str, Any]]:
        """Searches for similar candidates by query"""
        try:
            results = self.vector_store.similarity_search_with_score(query, k=k)

            candidates = []
            for doc, score in results:
                candidate = {
                    "content": doc.page_content,
                    "metadata": doc.metadata,
                    "similarity_score": score,
                }
                candidates.append(candidate)

            return candidates
        except Exception as e:
            raise Exception(f"Ошибка при поиске кандидатов в Milvus: {str(e)}") from e

    def get_candidate_by_id(self, candidate_id: str) -> dict[str, Any]:
        """Gets a candidate by ID"""
        try:
            results = self.vector_store.similarity_search(
                query="", k=1, expr=f"candidate_id == '{candidate_id}'"
            )

            if results:
                doc = results[0]
                return {"content": doc.page_content, "metadata": doc.metadata}
            else:
                return None
        except Exception as e:
            raise Exception(f"Ошибка при получении кандидата из Milvus: {str(e)}") from e

    def delete_candidate(self, candidate_id: str):
        """Deletes a candidate from the vector store"""
@@ -99,6 +100,6 @@ class MilvusVectorStore:
            # Deletion by ID in Milvus
            self.vector_store.delete([candidate_id])
            return True
        except Exception as e:
            raise Exception(f"Ошибка при удалении кандидата из Milvus: {str(e)}") from e
@@ -8,23 +8,25 @@ from pathlib import Path
# Add root directory to PYTHONPATH
sys.path.append(str(Path(__file__).parent))


async def test_database():
    """Test PostgreSQL connection"""
    print("Testing database connection...")

    try:
        from sqlalchemy import select

        from app.core.database import get_session as get_db
        from app.models.resume import Resume

        async for db in get_db():
            result = await db.execute(select(Resume).limit(1))
            resumes = result.scalars().all()
            print("PASS - Database connection successful")
            print(f"Found resumes: {len(resumes)}")
            return True
    except Exception as e:
        print(f"FAIL - Database error: {str(e)}")
        return False
@@ -33,14 +35,14 @@ async def test_database():

async def test_rag():
    """Test RAG system"""
    print("\nTesting RAG system...")

    try:
        from rag.llm.model import ResumeParser
        from rag.registry import registry

        chat_model = registry.get_chat_model()
        parser = ResumeParser(chat_model)

        # Test resume text
        test_text = """
        John Doe
@@ -49,14 +51,14 @@ async def test_rag():
        Skills: Python, Django, PostgreSQL
        Education: Computer Science
        """

        parsed_resume = parser.parse_resume_text(test_text)
        print("PASS - RAG system working")
        print(f"Parsed data keys: {list(parsed_resume.keys())}")
        return True
    except Exception as e:
        print(f"FAIL - RAG error: {str(e)}")
        return False
@@ -65,21 +67,22 @@ async def test_rag():

def test_redis():
    """Test Redis connection"""
    print("\nTesting Redis connection...")

    try:
        import redis

        from rag.settings import settings

        r = redis.Redis(
            host=settings.redis_cache_url,
            port=settings.redis_cache_port,
            db=settings.redis_cache_db,
        )
        r.ping()
        print("PASS - Redis connection successful")
        return True
    except Exception as e:
        print(f"FAIL - Redis error: {str(e)}")
        print("TIP: Start Redis with: docker run -d -p 6379:6379 redis:alpine")
@@ -89,24 +92,24 @@ def test_redis():

async def test_interview_service():
    """Test interview service"""
    print("\nTesting interview service...")

    try:
        from app.core.database import get_session as get_db
        from app.services.interview_service import InterviewRoomService

        async for db in get_db():
            service = InterviewRoomService(db)

            # Test token generation
            token = service.generate_access_token("test_room", "test_user")
            print(f"PASS - Token generated (length: {len(token)})")

            # Test fallback plan
            plan = service._get_fallback_interview_plan()
            print(f"PASS - Interview plan structure: {list(plan.keys())}")
            return True
    except Exception as e:
        print(f"FAIL - Interview service error: {str(e)}")
        return False
@@ -115,10 +118,10 @@ async def test_interview_service():

def test_ai_agent():
    """Test AI agent"""
    print("\nTesting AI agent...")

    try:
        from ai_interviewer_agent import InterviewAgent

        test_plan = {
            "interview_structure": {
                "duration_minutes": 15,
@@ -127,24 +130,24 @@ def test_ai_agent():
                    {
                        "name": "Introduction",
                        "duration_minutes": 5,
                        "questions": ["Tell me about yourself"],
                    }
                ],
            },
            "candidate_info": {
                "name": "Test Candidate",
                "skills": ["Python"],
                "total_years": 2,
            },
        }

        agent = InterviewAgent(test_plan)
        print(f"PASS - AI Agent initialized with {len(agent.sections)} sections")
        print(f"Current section: {agent.get_current_section().get('name')}")
        return True
    except Exception as e:
        print(f"FAIL - AI Agent error: {str(e)}")
        return False
@@ -155,7 +158,7 @@ async def main():
    print("=" * 50)
    print("HR-AI SYSTEM TEST")
    print("=" * 50)

    tests = [
        ("Database", test_database),
        ("RAG System", test_rag),
@@ -163,9 +166,9 @@ async def main():
        ("Interview Service", test_interview_service),
        ("AI Agent", lambda: test_ai_agent()),
    ]

    results = []
    for test_name, test_func in tests:
        try:
            if asyncio.iscoroutinefunction(test_func):
@@ -176,27 +179,29 @@ async def main():
        except Exception as e:
            print(f"CRITICAL ERROR in {test_name}: {str(e)}")
            results.append((test_name, False))

    # Summary
    print("\n" + "=" * 50)
    print("TEST RESULTS")
    print("=" * 50)

    passed = 0
    for test_name, result in results:
        status = "PASS" if result else "FAIL"
        print(f"{test_name:20} {status}")
        if result:
            passed += 1

    total = len(results)
    print(f"\nRESULT: {passed}/{total} tests passed")

    if passed == total:
        print("\nSYSTEM READY FOR TESTING!")
        print("Next steps:")
        print("1. Start FastAPI: uvicorn app.main:app --reload")
        print(
            "2. Start Celery: celery -A celery_worker.celery_app worker --loglevel=info"
        )
        print("3. Upload test resume via /resume/upload")
        print("4. Check interview plan generation")
    else:
@@ -205,4 +210,4 @@ async def main():

if __name__ == "__main__":
    asyncio.run(main())
@@ -5,32 +5,34 @@
import asyncio
import sys
from pathlib import Path

import requests

# Add the root directory to PYTHONPATH
sys.path.append(str(Path(__file__).parent))


async def test_database_connection():
    """Test the PostgreSQL connection"""
    print("Testing database connection...")

    try:
        from sqlalchemy import select

        from app.core.database import get_db
        from app.models.resume import Resume

        # Get an async session
        async for db in get_db():
            # Try running a simple query
            result = await db.execute(select(Resume).limit(1))
            resumes = result.scalars().all()
            print("OK Database: connection successful")
            print(f"Found resumes in database: {len(resumes)}")
            return True
    except Exception as e:
        print(f"FAIL Database: connection error - {str(e)}")
        return False
@@ -39,20 +41,20 @@ async def test_database_connection():

async def test_rag_system():
    """Test the RAG system (resume parsing)"""
    print("\n🔍 Testing the RAG system...")

    try:
        from rag.llm.model import ResumeParser
        from rag.registry import registry

        # Initialize the models
        chat_model = registry.get_chat_model()
        # embeddings_model = registry.get_embeddings_model()
        print("✅ RAG system: models initialized")

        # Test the resume parser
        parser = ResumeParser(chat_model)

        # Create a test resume text
        test_resume_text = """
        Иван Петров
@@ -61,14 +63,14 @@ async def test_rag_system():
        Навыки: Python, Django, PostgreSQL, Docker
        Образование: МГУ, факультет ВМК
        """

        parsed_resume = parser.parse_resume_text(test_resume_text)
        print("✅ RAG system: resume parsing works")
        print(f"📋 Parsed data: {parsed_resume}")
        return True
    except Exception as e:
        print(f"❌ RAG system: error - {str(e)}")
        return False
@@ -77,41 +79,44 @@ async def test_rag_system():

def test_redis_connection():
    """Test the Redis connection"""
    print("\n🔍 Testing the Redis connection...")

    try:
        import redis

        from rag.settings import settings

        r = redis.Redis(
            host=settings.redis_cache_url,
            port=settings.redis_cache_port,
            db=settings.redis_cache_db,
        )
        # Try a ping
        r.ping()
        print("✅ Redis: connection successful")
        return True
    except Exception as e:
        print(f"❌ Redis: connection error - {str(e)}")
        print(
            "💡 To start Redis, use: docker run -d -p 6379:6379 redis:alpine"
        )
        return False


async def test_celery_tasks():
    """Test the Celery tasks"""
    print("\n🔍 Testing Celery tasks...")

    try:
        from celery_worker.tasks import parse_resume_task

        print("✅ Celery: tasks import correctly")
        print(
            "💡 For a full test, run: celery -A celery_worker.celery_app worker --loglevel=info"
        )
        return True
    except Exception as e:
        print(f"❌ Celery: error - {str(e)}")
        return False
@@ -120,14 +125,14 @@ async def test_celery_tasks():

async def test_interview_service():
    """Test the interview service (without LiveKit)"""
    print("\n🔍 Testing the interview service...")

    try:
        from app.core.database import get_db
        from app.services.interview_service import InterviewRoomService

        async for db in get_db():
            service = InterviewRoomService(db)

            # Test token generation (should work even without a LiveKit server)
            try:
                token = service.generate_access_token("test_room", "test_user")
@@ -135,14 +140,14 @@ async def test_interview_service():
                print(f"🎫 Test token generated (length: {len(token)})")
            except Exception as e:
                print(f"⚠️ Interview Service: token error - {str(e)}")

            # Test the fallback interview plan
            fallback_plan = service._get_fallback_interview_plan()
            print("✅ Interview Service: fallback plan works")
            print(f"📋 Plan structure: {list(fallback_plan.keys())}")
            return True
    except Exception as e:
        print(f"❌ Interview Service: error - {str(e)}")
        return False
@@ -151,10 +156,10 @@ async def test_interview_service():

def test_ai_agent_import():
    """Test importing the AI agent"""
    print("\n🔍 Testing the AI agent...")

    try:
        from ai_interviewer_agent import InterviewAgent

        # Test interview plan
        test_plan = {
            "interview_structure": {
@@ -164,34 +169,34 @@ def test_ai_agent_import():
                    {
                        "name": "Знакомство",
                        "duration_minutes": 5,
                        "questions": ["Расскажи о себе"],
                    },
                    {
                        "name": "Опыт",
                        "duration_minutes": 10,
                        "questions": ["Какой у тебя опыт?"],
                    },
                ],
            },
            "candidate_info": {
                "name": "Тестовый кандидат",
                "skills": ["Python"],
                "total_years": 2,
            },
        }

        agent = InterviewAgent(test_plan)
        print("✅ AI Agent: import and initialization work")
        print(f"📊 Sections in the plan: {len(agent.sections)}")
        print(f"🎯 Current section: {agent.get_current_section().get('name')}")

        # Test extracting the system instructions
        instructions = agent.get_system_instructions()
        print(f"📝 Instructions generated (length: {len(instructions)})")
        return True
    except Exception as e:
        print(f"❌ AI Agent: error - {str(e)}")
        return False
@@ -200,10 +205,11 @@ def test_ai_agent_import():

def check_external_services():
    """Check external services"""
    print("\n🔍 Checking external services...")

    # Check Milvus
    try:
        from rag.settings import settings

        response = requests.get(f"{settings.milvus_uri}/health", timeout=5)
        if response.status_code == 200:
            print("✅ Milvus: server is reachable")
@@ -211,23 +217,27 @@ def check_external_services():
            print("⚠️ Milvus: server is unreachable")
    except Exception:
        print("❌ Milvus: server is unreachable")

    # Check LiveKit (if running)
    try:
        # The LiveKit health check is usually on the HTTP port
        livekit_http_url = settings.livekit_url.replace("ws://", "http://").replace(
            ":7880", ":7880"
        )
        response = requests.get(livekit_http_url, timeout=2)
        print("✅ LiveKit: server is running")
    except Exception:
        print("❌ LiveKit: server is not running")
        print(
            "💡 To start LiveKit with Docker: docker run --rm -p 7880:7880 -p 7881:7881 livekit/livekit-server --dev"
        )
async def run_all_tests():
    """Run all tests"""
    print("=== HR-AI System Testing ===")
    print("=" * 50)

    tests = [
        ("Database", test_database_connection),
        ("RAG System", test_rag_system),
@@ -236,9 +246,9 @@ async def run_all_tests():
        ("Interview Service", test_interview_service),
        ("AI Agent", lambda: test_ai_agent_import()),
    ]

    results = {}
    for test_name, test_func in tests:
        try:
            if asyncio.iscoroutinefunction(test_func):
@@ -249,24 +259,24 @@ async def run_all_tests():
        except Exception as e:
            print(f"{test_name}: critical error - {str(e)}")
            results[test_name] = False

    # Check external services
    check_external_services()

    # Final report
    print("\n" + "=" * 50)
    print("📊 FINAL REPORT")
    print("=" * 50)

    passed = sum(1 for r in results.values() if r)
    total = len(results)

    for test_name, result in results.items():
        status = "✅ PASS" if result else "❌ FAIL"
        print(f"{test_name:20} {status}")

    print(f"\n🎯 Result: {passed}/{total} tests passed")

    if passed == total:
        print("🎉 The system is ready for testing!")
        print_next_steps()
@@ -279,7 +289,9 @@ def print_next_steps():
    """Next steps for full testing"""
    print("\n📋 NEXT STEPS:")
    print("1. Start the FastAPI server: uvicorn app.main:app --reload")
    print(
        "2. Start the Celery worker: celery -A celery_worker.celery_app worker --loglevel=info"
    )
    print("3. Upload a test resume via /resume/upload")
    print("4. Check interview plan generation in the database")
    print("5. A full voice-interview test will also require:")
@@ -291,10 +303,10 @@ def print_troubleshooting():
    """Troubleshooting"""
    print("\n🔧 TROUBLESHOOTING:")
    print("• Redis not running: docker run -d -p 6379:6379 redis:alpine")
    print("• Milvus unreachable: check the MILVUS_URI settings")
    print("• RAG errors: check OPENAI_API_KEY")
    print("• Database: check DATABASE_URL and run alembic upgrade head")


if __name__ == "__main__":
    asyncio.run(run_all_tests())
uv.lock — 3988 changes (file diff suppressed because it is too large)