ai-python-chat

FastAPI chat API that:

  • stores conversations + messages in Postgres (Docker)
  • calls an OpenAI model to generate assistant replies
  • persists both the user message and assistant reply for each request

Requirements

  • Python 3.10+ (recommended: use a virtualenv)
  • Docker Desktop (for Postgres)
  • An OpenAI API key

Stack

  • API: FastAPI + Uvicorn (api/main.py, api/routes.py)
  • DB: Postgres (Docker Compose) + SQLAlchemy ORM (db/models.py, db/session.py, db/repository.py)
  • LLM: OpenAI (llm/llm_client.py)

Persistence

Postgres runs in Docker and stores data in a named Docker volume (postgres_data). Data persists across container restarts, and is removed only if you delete the volume (e.g. docker compose down -v).

Tables are created via create_tables.py (dev bootstrap). If you later want migrations, add Alembic.
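A dev bootstrap like create_tables.py can be little more than a `create_all` call. The sketch below is illustrative only: the Conversation/Message columns are assumptions, not the repo's actual schema in db/models.py, and it falls back to in-memory SQLite so it runs without Postgres.

```python
# Minimal sketch of a create_tables.py-style bootstrap.
# Column names/types here are assumptions, not the repo's actual schema.
import os

from sqlalchemy import (Column, DateTime, ForeignKey, Integer, String, Text,
                        create_engine, func)
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Conversation(Base):
    __tablename__ = "conversations"
    id = Column(Integer, primary_key=True)
    title = Column(String(255), nullable=False)
    created_at = Column(DateTime, server_default=func.now())

class Message(Base):
    __tablename__ = "messages"
    id = Column(Integer, primary_key=True)
    conversation_id = Column(Integer, ForeignKey("conversations.id"), nullable=False)
    role = Column(String(16), nullable=False)   # "user" or "assistant"
    content = Column(Text, nullable=False)

if __name__ == "__main__":
    # Uses DATABASE_URL when set; otherwise an in-memory SQLite engine,
    # so the sketch is runnable without Docker.
    engine = create_engine(os.environ.get("DATABASE_URL", "sqlite://"))
    Base.metadata.create_all(engine)
    print("tables created")
```

`create_all` is idempotent (it skips tables that already exist), which is why it works as a repeatable dev bootstrap; Alembic becomes necessary once you need to alter existing tables.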

Setup

  1. Start Postgres:

     docker compose up -d

  2. Create a local .env (repo root) and keep it out of git:

     DATABASE_URL=postgresql+psycopg://ai_user:ai_password@localhost:5433/ai_python_chat
     OPENAI_API_KEY=sk-...
     OPENAI_MODEL=<your-model-id>

  3. Install Python dependencies (example using uv):

     uv venv
     source .venv/bin/activate
     uv pip install fastapi uvicorn sqlalchemy psycopg python-dotenv openai

  4. Create tables:

     python3 -m dotenv run -- python3 create_tables.py

  5. Run the API:

     uvicorn api.main:app --reload --env-file .env

Open interactive docs at http://127.0.0.1:8000/docs.
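For reference, a docker-compose.yml consistent with the DATABASE_URL above might look like the sketch below. The credentials, database name, port mapping, and volume name come from this README; the service name and the `postgres:16` image tag are assumptions — check the repo's actual docker-compose.yml.

```yaml
# Sketch only; service name and image tag are assumptions.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: ai_user
      POSTGRES_PASSWORD: ai_password
      POSTGRES_DB: ai_python_chat
    ports:
      - "5433:5432"   # host 5433 -> container 5432 (see Troubleshooting)
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
```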

UI (Streamlit)

This repo includes a minimal Streamlit UI that calls the FastAPI endpoints.

  1. Install Streamlit:

     uv pip install streamlit

  2. Run the API (in one terminal):

     uvicorn api.main:app --reload --env-file .env

  3. Run the UI (in another terminal):

     streamlit run ui/app.py

By default the UI calls http://127.0.0.1:8000. You can override it via the sidebar or by setting API_BASE_URL.

The UI supports both text replies and TTS audio replies (using the /conversations/{conversation_id}/tts endpoint).

API endpoints

  • GET /conversations — list conversations
  • POST /conversations?title=... — create a conversation
  • GET /conversations/{conversation_id}/messages — list message history for a conversation
  • POST /conversations/{conversation_id}/messages?content=... — append a user message, call OpenAI, store and return the assistant reply

Example usage (curl)

Create a conversation:

curl -X POST "http://127.0.0.1:8000/conversations?title=Test"

Send a message (replace <id> with the returned conversation_id). The endpoint expects content as a query parameter, so pass -G to make curl append the urlencoded data to the URL while -X POST keeps the method:

curl -G -X POST "http://127.0.0.1:8000/conversations/<id>/messages" \
  --data-urlencode "content=Hello! Can you summarize what this service does?"

Fetch history:

curl "http://127.0.0.1:8000/conversations/<id>/messages"
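The same requests can be built from Python with only the standard library; the key detail, as with curl's --data-urlencode, is percent-encoding the content before it goes into the query string. The helper name below is illustrative, not part of the repo.

```python
# Build the POST /conversations/{id}/messages URL with a safely
# encoded query string (the stdlib equivalent of --data-urlencode).
from urllib.parse import urlencode

BASE = "http://127.0.0.1:8000"

def message_url(conversation_id: int, content: str) -> str:
    return (f"{BASE}/conversations/{conversation_id}/messages?"
            + urlencode({"content": content}))

print(message_url(1, "Hello! Can you summarize what this service does?"))
```

The resulting URL can then be sent with `urllib.request.urlopen(urllib.request.Request(url, method="POST"))`, or any HTTP client.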

Troubleshooting

  • Port conflicts: this repo maps container 5432 to host 5433 in docker-compose.yml. If you change it, update DATABASE_URL accordingly.
  • Secrets: never commit .env. If you accidentally share an API key, revoke/rotate it in the OpenAI dashboard.

About

This project implements a chat that relies on an AI agent and stores previous requests and replies in a Postgres database.
