A FastAPI chat API that:
- stores conversations + messages in Postgres (Docker)
- calls an OpenAI model to generate assistant replies
- persists both the user message and assistant reply for each request
## Prerequisites

- Python 3.10+ (recommended: use a virtualenv)
- Docker Desktop (for Postgres)
- An OpenAI API key
## Architecture

- API: FastAPI + Uvicorn (`api/main.py`, `api/routes.py`)
- DB: Postgres (Docker Compose) + SQLAlchemy ORM (`db/models.py`, `db/session.py`, `db/repository.py`)
- LLM: OpenAI (`llm/llm_client.py`)
## Data persistence

Postgres runs in Docker and stores data in a named Docker volume (`postgres_data`). Data persists across container restarts and is removed only if you delete the volume (e.g. `docker compose down -v`).

Tables are created via `create_tables.py` (dev bootstrap). If you later want migrations, add Alembic.
## Setup

- Start Postgres:

  ```
  docker compose up -d
  ```

- Create a local `.env` (repo root) and keep it out of git:

  ```
  DATABASE_URL=postgresql+psycopg://ai_user:ai_password@localhost:5433/ai_python_chat
  OPENAI_API_KEY=sk-...
  OPENAI_MODEL=<your-model-id>
  ```

- Install Python dependencies (example using `uv`):

  ```
  uv venv
  source .venv/bin/activate
  uv pip install fastapi uvicorn sqlalchemy psycopg python-dotenv openai
  ```

- Create tables:

  ```
  python3 -m dotenv run -- python3 create_tables.py
  ```

- Run the API:

  ```
  uvicorn api.main:app --reload --env-file .env
  ```

  Open interactive docs at http://127.0.0.1:8000/docs.
## Streamlit UI

This repo includes a minimal Streamlit UI that calls the FastAPI endpoints.

- Install Streamlit:

  ```
  uv pip install streamlit
  ```

- Run the API (in one terminal):

  ```
  uvicorn api.main:app --reload --env-file .env
  ```

- Run the UI (in another terminal):

  ```
  streamlit run ui/app.py
  ```

By default the UI calls http://127.0.0.1:8000. You can override it via the sidebar or by setting `API_BASE_URL`.

The UI supports both text replies and TTS audio replies (using the `/conversations/{conversation_id}/tts` endpoint).
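The `API_BASE_URL` override can be as simple as an environment lookup inside `ui/app.py`. A sketch, assuming a hypothetical helper name (the real UI may resolve this differently):

```python
import os


def get_api_base_url(default: str = "http://127.0.0.1:8000") -> str:
    """Resolve the UI's API target, letting API_BASE_URL override the default.

    Hypothetical helper for illustration; trailing slashes are stripped so
    paths like f"{base}/conversations" join cleanly.
    """
    return os.environ.get("API_BASE_URL", default).rstrip("/")
```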
## API endpoints

- `GET /conversations` — list conversations
- `POST /conversations?title=...` — create a conversation
- `GET /conversations/{conversation_id}/messages` — list message history for a conversation
- `POST /conversations/{conversation_id}/messages?content=...` — append a user message, call OpenAI, store and return the assistant reply
## Example requests

Create a conversation:

```
curl -X POST "http://127.0.0.1:8000/conversations?title=Test"
```

Send a message (replace `<id>` with the returned `conversation_id`):

```
curl -X POST "http://127.0.0.1:8000/conversations/<id>/messages" \
  --data-urlencode "content=Hello! Can you summarize what this service does?"
```

Fetch history:

```
curl "http://127.0.0.1:8000/conversations/<id>/messages"
```

## Notes

- Port conflicts: this repo maps container `5432` to host `5433` in `docker-compose.yml`. If you change it, update `DATABASE_URL` accordingly.
- Secrets: never commit `.env`. If you accidentally share an API key, revoke/rotate it in the OpenAI dashboard.
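For context on the port mapping and named volume, a hedged sketch of the relevant `docker-compose.yml` fragment — the service name and image tag are assumptions; the credentials simply mirror the `DATABASE_URL` shown above:

```yaml
services:
  db:
    image: postgres:16          # assumed tag; check docker-compose.yml
    ports:
      - "5433:5432"             # host 5433 -> container 5432; keep DATABASE_URL in sync
    environment:
      POSTGRES_USER: ai_user
      POSTGRES_PASSWORD: ai_password
      POSTGRES_DB: ai_python_chat
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:                # named volume; removed only by `docker compose down -v`
```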