40 changes: 40 additions & 0 deletions .github/workflows/ci.yml
@@ -0,0 +1,40 @@
name: CI

on:
push:
pull_request:

jobs:
backend:
runs-on: ubuntu-latest
defaults:
run:
working-directory: backend
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Install backend dependencies
run: |
python -m pip install --upgrade pip
pip install -e .[dev]
- name: Run backend tests
run: pytest -q

frontend:
runs-on: ubuntu-latest
defaults:
run:
working-directory: frontend
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install frontend dependencies
run: npm install
- name: Run frontend tests
run: npm test
- name: Build frontend
run: npm run build
142 changes: 142 additions & 0 deletions HOWTO.md
@@ -0,0 +1,142 @@
# HOWTO: Run and Operate Career Radar MVP

## 1) Prerequisites
- Docker + Docker Compose (recommended)
- Or local runtimes:
- Python 3.12+
- Node.js 18+
- PostgreSQL 16+

## 2) Environment variables and configuration
Backend settings are defined in `backend/app/core/config.py` and loaded from `.env`.

Common values:
- `DATABASE_URL` (default: `postgresql+psycopg://postgres:postgres@db:5432/career_radar`)
- `USE_LLM_STRATEGY` (default `false`; the MVP runs in deterministic mode)
- `API_PREFIX` (default `/api/v1`)
- `CORS_ORIGINS` (default `*`)

Frontend uses:
- `NEXT_PUBLIC_API_URL` (default `http://localhost:8000/api/v1`)

When using Docker Compose, these are already wired in `docker-compose.yml`.
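For local (non-Docker) runs, a minimal `.env` sketch built from the defaults above (adjust host and credentials for your setup):

```bash
# backend settings (loaded by backend/app/core/config.py)
DATABASE_URL=postgresql+psycopg://postgres:postgres@db:5432/career_radar
USE_LLM_STRATEGY=false
API_PREFIX=/api/v1
CORS_ORIGINS=*

# frontend
NEXT_PUBLIC_API_URL=http://localhost:8000/api/v1
```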

## 3) Start with Docker Compose (recommended)
```bash
make up
```
This starts:
- `db` (Postgres)
- `backend` (FastAPI)
- `frontend` (Next.js)

URLs:
- Frontend: http://localhost:3000
- Backend OpenAPI docs: http://localhost:8000/docs

Stop:
```bash
make down
```

## 4) Run backend/frontend locally without Docker
### Backend
```bash
cd backend
pip install -e ".[dev]"
alembic upgrade head
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

### Frontend
```bash
cd frontend
npm install
NEXT_PUBLIC_API_URL=http://localhost:8000/api/v1 npm run dev
```

## 5) Migrations and seed data
- Apply migrations:
```bash
make migrate
```
or `cd backend && alembic upgrade head`.

- Seed data is inserted on backend startup by `app.main.startup_event()` via `app.db.seed.seed_data`.
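Because seeding runs on every startup, it has to be idempotent. A minimal sketch of that guard pattern, using a plain dict in place of the database (the function body and seeded values here are illustrative, not the app's actual code):

```python
# Illustrative idempotent-seed pattern: insert defaults only when the
# store is empty, so running it on every startup is safe.
def seed_data(store: dict) -> bool:
    """`store` stands in for the database; returns True if seeding ran."""
    if store.get("scoring_weights"):
        return False  # already seeded, nothing to do
    store["scoring_weights"] = [{"factor": "profile_alignment", "weight": 0.3}]
    store["feature_flags"] = [{"name": "USE_LLM_STRATEGY", "enabled": False}]
    return True

db: dict = {}
first_run = seed_data(db)   # seeds the empty store
second_run = seed_data(db)  # no-op on the second call
```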

## 6) Scheduler behavior and cadence
Configured in `backend/app/jobs/scheduler.py`:
- ingest: every 30m
- rescore: every 20m
- strategy: every 60m
- stale check/signals: every 6h
- company intelligence: every 45m
- decision engine refresh: every 30m

`job_runs` captures status, count, and summary for observability.

## 7) Connector configuration and manual trigger
Opportunity connector ingest (mock connectors in MVP):
- `POST /api/v1/ingest/connectors`

CSV ingest:
- `POST /api/v1/ingest/csv`

Admin job trigger endpoint:
- `POST /api/v1/admin/jobs/{job_name}`
- supported: `ingest`, `rescore`, `strategy`, `stale`, `company_intelligence`, `decision_engine`
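A sketch of the dispatch guard such an endpoint typically needs (the function and return shape are illustrative, not the actual handler):

```python
# Allow-list from the supported jobs documented above; anything else is rejected.
SUPPORTED_JOBS = {"ingest", "rescore", "strategy", "stale",
                  "company_intelligence", "decision_engine"}


def trigger_job(job_name: str) -> dict:
    """Validate the job name before queueing; unknown names get a 400-style error."""
    if job_name not in SUPPORTED_JOBS:
        return {"status_code": 400, "detail": f"unknown job: {job_name}"}
    return {"status_code": 202, "detail": f"{job_name} triggered"}


ok = trigger_job("rescore")
bad = trigger_job("reindex")
```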

## 8) Company intelligence trigger path
- Ingest endpoint: `POST /api/v1/ingest/company-intelligence`
- List signals: `GET /api/v1/company-signals`
- Company-specific signals: `GET /api/v1/companies/{company_id}/signals`

## 9) Recommendation / decision engine behavior (high level)
- Refresh endpoint: `POST /api/v1/recommendations/refresh`
- List/filter endpoint: `GET /api/v1/recommendations?status=open&urgency=high`
- Detail endpoint: `GET /api/v1/recommendations/{id}`

Decision inputs are deterministic:
- opportunity score
- opportunity signals (severity + recency)
- company intelligence signals
- network node quality
- staleness/timing

Output categories:
- `OPPORTUNITY_PRIORITY`
- `SIGNAL_ALERT`
- `NETWORK_ACTION`
- `FOLLOW_UP_ACTION`
- `WATCHLIST_ESCALATION`

Closed/archived/rejected opportunities are excluded from active recommendation generation.
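Since the inputs are deterministic, the decision score can be modeled as a plain weighted sum with urgency buckets. A sketch under assumed weights and thresholds (none of these numbers come from the repo):

```python
# Assumed weights over the deterministic inputs listed above; the real
# engine's weights and thresholds may differ.
WEIGHTS = {
    "opportunity_score": 0.40,
    "signal_strength": 0.25,  # severity + recency of opportunity signals
    "company_signals": 0.15,
    "network_quality": 0.10,
    "timing": 0.10,           # staleness/timing pressure
}


def decision_score(inputs: dict[str, float]) -> float:
    """Weighted sum of normalized (0..1) inputs; missing inputs count as 0."""
    return sum(w * inputs.get(k, 0.0) for k, w in WEIGHTS.items())


def urgency(score: float) -> str:
    if score >= 0.75:
        return "high"
    if score >= 0.50:
        return "medium"
    return "low"


score = decision_score({"opportunity_score": 0.9, "signal_strength": 0.8,
                        "company_signals": 0.6, "network_quality": 0.5,
                        "timing": 0.4})
```

Keeping the weights in one table is what makes the output explainable: each recommendation's score decomposes into named contributions.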

## 10) Where to inspect logs and job runs
- Runtime logs:
```bash
make logs
```
- Job runs API:
- `GET /api/v1/admin/jobs/runs`
- Admin UI:
- `/admin`

## 11) Basic troubleshooting
- API unavailable from frontend:
- verify `NEXT_PUBLIC_API_URL`
- verify backend is reachable at `http://localhost:8000`
- DB connection failures:
- check `DATABASE_URL`
- ensure Postgres is running and migration head is applied
- No recommendations shown:
- trigger `POST /api/v1/recommendations/refresh`
- check `GET /api/v1/recommendations?status=open`
- No realtime updates:
- check `GET /api/v1/events/stream` and browser network tab

## 12) Known MVP limitations
- Auth/multi-user tenancy is not implemented.
- Connectors are intentionally minimal (MVP mock + RSS parsing path).
- Strategy path is deterministic by default.
- UI is functional and compact, not fully polished.
89 changes: 27 additions & 62 deletions README.md
@@ -1,60 +1,30 @@
# Career Intelligence Agent + Career Radar Dashboard (MVP+)

## What this build improves
This iteration hardens the original MVP with stronger explainable scoring, an actionable signal layer, scheduler observability, and more reliable SSE UX while preserving the same stack and local workflow.

## Architecture
- **Backend**: FastAPI + SQLAlchemy + Alembic + APScheduler (`/backend`)
- **Frontend**: Next.js + TypeScript + Tailwind (`/frontend`)
- **DB**: PostgreSQL via Docker Compose
- **Realtime**: SSE (`/api/v1/events/stream`) with event ids + heartbeat + frontend reconnect state.
- **Adapters**: External job sources go through connector adapters; the strategy engine defaults to deterministic output and can be switched via a feature flag.

## Core capabilities
- Structured profile engine with seeded senior ops/governance/security persona.
- Opportunity ingest (manual, CSV, mock connectors), CRUD, status updates.
- Explainable scoring engine with weighted factors:
- profile alignment
- compensation fit
- geography fit
- leadership/seniority fit
- industry fit
- strategic value
- ease of absorption
- Signal layer for opportunities/companies (`new_role_posted`, `comp_below_threshold`, `stale_opportunity`, `high_strategic_visibility`).
- Network intelligence + top entry-point recommendations.
- Weekly micro-actions + monthly strategy reviews.
- Background automation (ingest/rescore/strategy/stale checks).
- Job observability via persisted `job_runs` history and admin UI.

## Scheduler behavior
APScheduler starts on backend startup and runs:
- ingest: every 30m
- rescore: every 20m
- strategy: every 60m
- stale check/signals: every 6h

Each run writes `job_runs` with status, processed count, timestamps, and summary. You can inspect via:
- API: `GET /api/v1/admin/jobs/runs`
- UI: `/admin`
- Manual trigger: `POST /api/v1/admin/jobs/{ingest|rescore|strategy|stale}`

## SSE behavior
SSE is used (instead of WebSockets) for simple one-way dashboard refreshes and lower operational complexity.
- Backend emits `id`, `data`, `retry`, and heartbeat comments.
- Frontend deduplicates events by version and reconnects automatically.
- UI shows realtime connection status badge.
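The id/heartbeat framing can be seen with a minimal parser. A stdlib sketch that splits a raw SSE stream into events, skipping comment heartbeats (illustrative only; a real client should use the browser's `EventSource` or an SSE library):

```python
def parse_sse(raw: str) -> list[dict]:
    """Split a raw SSE stream into events; lines starting with ':' are comments."""
    events, current = [], {}
    for line in raw.splitlines():
        if line.startswith(":"):   # heartbeat comment, ignore
            continue
        if line == "":             # blank line terminates the current event
            if current:
                events.append(current)
                current = {}
            continue
        name, _, value = line.partition(":")
        current[name] = value.lstrip()
    if current:
        events.append(current)
    return events


raw = (
    "id: 41\ndata: {\"version\": 41}\n\n"
    ": heartbeat\n\n"
    "id: 42\ndata: {\"version\": 42}\nretry: 5000\n\n"
)
events = parse_sse(raw)
```

The `id` field is what lets the frontend deduplicate by version after a reconnect, and `retry` tells the browser how long to wait before reconnecting.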

## Local run
# Career Radar (Locked MVP)

Career Radar is a compact career-intelligence MVP that ingests opportunities, scores them, derives signals, and produces deterministic action recommendations.

## Architecture (high level)
- **Backend:** FastAPI + SQLAlchemy + Alembic + APScheduler
- **Frontend:** Next.js + TypeScript
- **Database:** PostgreSQL
- **Realtime:** SSE (`/api/v1/events/stream`)
- **Data flows:**
- Opportunity ingest/connectors/CSV
- Scoring engine
- Opportunity signals + company intelligence signals
- Deterministic decision engine recommendations
- Scheduler + admin-triggered jobs

## Quick start
```bash
cp .env.example .env
cp .env.example .env 2>/dev/null || true
make up
```

Then open:
- Frontend: http://localhost:3000
- Backend docs: http://localhost:8000/docs

## Useful commands
Useful commands:
```bash
make migrate
make test-backend
make logs
make down
```

## Seed data
On first backend startup, the seed step inserts:
- sample transition profile
- default scoring weights
- feature flags
- sample company + network node
- seeded opportunity (pre-scored)
- initial signals
## Operations guide
See **[HOWTO.md](./HOWTO.md)** for full local operation details, scheduler cadence, connector/recommendation behavior, troubleshooting, and current MVP limitations.

## CI
GitHub Actions runs backend and frontend validation on push and pull request.

## Known limitations / next steps
- LLM strategy path currently reuses deterministic output when no provider integration is configured.
- Auth/multi-user tenancy not implemented in v1.
- Network view uses structured panel; graph visualization can be added later.
## MVP scope lock
This repo is intentionally locked to a compact MVP. New major capabilities should be captured as future work, not added in this stabilization pass.
37 changes: 37 additions & 0 deletions backend/alembic/versions/0003_company_signals.py
@@ -0,0 +1,37 @@
"""company signals
Revision ID: 0003_company_signals
Revises: 0002_signals_job_runs
Create Date: 2026-03-09
"""
from alembic import op
import sqlalchemy as sa

revision = "0003_company_signals"
down_revision = "0002_signals_job_runs"
branch_labels = None
depends_on = None


def upgrade() -> None:
op.create_table(
"company_signals",
sa.Column("id", sa.Integer(), primary_key=True),
sa.Column("company_id", sa.Integer(), sa.ForeignKey("companies.id"), nullable=False),
sa.Column("signal_type", sa.String(40), nullable=False),
sa.Column("severity", sa.String(20), nullable=False),
sa.Column("title", sa.String(180), nullable=False),
sa.Column("description", sa.Text(), nullable=False),
sa.Column("source_url", sa.String(500), nullable=False),
sa.Column("detected_at", sa.DateTime(), nullable=False),
)
op.create_index("ix_company_signals_company_id", "company_signals", ["company_id"])
op.create_index("ix_company_signals_signal_type", "company_signals", ["signal_type"])
op.create_index("ix_company_signals_detected_at", "company_signals", ["detected_at"])


def downgrade() -> None:
op.drop_index("ix_company_signals_detected_at", table_name="company_signals")
op.drop_index("ix_company_signals_signal_type", table_name="company_signals")
op.drop_index("ix_company_signals_company_id", table_name="company_signals")
op.drop_table("company_signals")
46 changes: 46 additions & 0 deletions backend/alembic/versions/0004_recommendations.py
@@ -0,0 +1,46 @@
"""recommendations

Revision ID: 0004_recommendations
Revises: 0003_company_signals
Create Date: 2026-03-09
"""
from alembic import op
import sqlalchemy as sa

revision = "0004_recommendations"
down_revision = "0003_company_signals"
branch_labels = None
depends_on = None


def upgrade() -> None:
op.create_table(
"recommendations",
sa.Column("id", sa.Integer(), primary_key=True),
sa.Column("recommendation_type", sa.String(40), nullable=False),
sa.Column("entity_type", sa.String(30), nullable=False),
sa.Column("entity_id", sa.Integer(), nullable=False),
sa.Column("decision_score", sa.Float(), nullable=False),
sa.Column("urgency", sa.String(20), nullable=False),
sa.Column("confidence", sa.Float(), nullable=False),
sa.Column("title", sa.String(200), nullable=False),
sa.Column("reason_summary", sa.Text(), nullable=False),
sa.Column("suggested_action", sa.Text(), nullable=False),
sa.Column("created_at", sa.DateTime(), nullable=False),
sa.Column("expires_at", sa.DateTime(), nullable=True),
sa.Column("status", sa.String(20), nullable=False),
)
op.create_index("ix_recommendations_recommendation_type", "recommendations", ["recommendation_type"])
op.create_index("ix_recommendations_entity_type", "recommendations", ["entity_type"])
op.create_index("ix_recommendations_entity_id", "recommendations", ["entity_id"])
op.create_index("ix_recommendations_urgency", "recommendations", ["urgency"])
op.create_index("ix_recommendations_status", "recommendations", ["status"])


def downgrade() -> None:
op.drop_index("ix_recommendations_status", table_name="recommendations")
op.drop_index("ix_recommendations_urgency", table_name="recommendations")
op.drop_index("ix_recommendations_entity_id", table_name="recommendations")
op.drop_index("ix_recommendations_entity_type", table_name="recommendations")
op.drop_index("ix_recommendations_recommendation_type", table_name="recommendations")
op.drop_table("recommendations")