A lightweight Gradio GUI for managing your local HuggingFace model cache.
If you work with HuggingFace models in Python, your ~/.cache/huggingface/hub
directory grows over time. This tool gives you a visual way to inspect, clean up,
download, and export cached models — without leaving your browser.
| Feature | Description |
|---|---|
| List | View all cached repos sorted by size, with file counts, revision counts, and timestamps |
| Info | Drill into any repo: see revisions, individual files, sizes, and cache paths |
| Delete | Remove repos (all revisions) directly from the cache |
| Download | Pull new models/datasets/spaces from the Hub into your local cache |
| Export | Copy a cached repo to another directory — as a raw folder or a .zip archive |
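The List and Info features can be built on huggingface_hub's cache-scanning API. A minimal sketch of how the size-sorted listing could be computed — `scan_cache_dir` and the `CachedRepoInfo` fields are real huggingface_hub APIs, but this particular helper is illustrative, not necessarily the app's own code:

```python
def list_cached_repos():
    """Return cached repos sorted by size on disk (largest first).

    Each entry: (repo_id, repo_type, size_bytes, n_files, n_revisions).
    """
    from huggingface_hub import scan_cache_dir  # lazy import: only needed on scan

    cache_info = scan_cache_dir()  # scans ~/.cache/huggingface/hub by default
    rows = [
        (r.repo_id, r.repo_type, r.size_on_disk, r.nb_files, len(r.revisions))
        for r in cache_info.repos
    ]
    return sorted(rows, key=lambda row: row[2], reverse=True)
```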
```
git clone https://github.com/YOUR_USERNAME/hf_cache_gui.git
cd hf_cache_gui
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python app.py
```

Open http://localhost:7860 in your browser.
- Click Refresh to scan your HuggingFace cache.
- Click any row in the table (or use the dropdown) to select a model.
- Use Info to see detailed file-level information.
- Use Delete to remove a model from cache.
- Use Download to fetch a new repo from the Hub by its ID (e.g. google/gemma-2b).
- Use Export to copy or zip a cached model to any directory.
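The "copy or zip" export step maps naturally onto the standard library. A sketch of what it might look like — `export_repo` is a hypothetical helper for illustration, not necessarily the app's implementation:

```python
import shutil
from pathlib import Path


def export_repo(cache_path: str, dest_dir: str, as_zip: bool = False) -> str:
    """Copy a cached repo folder into dest_dir, optionally as a .zip archive.

    Returns the path of the exported folder or archive.
    """
    src = Path(cache_path)
    dest = Path(dest_dir) / src.name
    if as_zip:
        # make_archive appends ".zip" to the base name it is given
        return shutil.make_archive(str(dest), "zip", root_dir=src)
    shutil.copytree(src, dest, dirs_exist_ok=True)
    return str(dest)
```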
If your HuggingFace cache is not in the default location, set the environment variable before launching:
```
export HF_HOME=/path/to/your/hf/cache
python app.py
```

To download gated or private models, you can either:
- Use the UI — paste your token into the "HF Token" field in the Download section.
- Log in via CLI (token is cached and reused automatically):
  ```
  huggingface-cli login
  ```
- Set an environment variable:
  ```
  export HF_TOKEN=hf_your_token_here
  python app.py
  ```
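The token options above can all funnel into huggingface_hub's `snapshot_download`, which accepts an explicit `token` and otherwise falls back to the cached CLI login. A sketch — `download_repo` itself is an illustrative helper, not the app's actual code:

```python
import os
from typing import Optional


def download_repo(repo_id: str, repo_type: str = "model",
                  token: Optional[str] = None) -> str:
    """Fetch a repo into the local HF cache and return its snapshot path.

    Token resolution mirrors the options above: explicit argument first,
    then the HF_TOKEN environment variable, then the cached CLI login
    (snapshot_download uses the cached login when token is None).
    """
    from huggingface_hub import snapshot_download  # lazy import

    token = token or os.environ.get("HF_TOKEN")
    return snapshot_download(repo_id, repo_type=repo_type, token=token)
```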
- Python 3.10+
- gradio >= 4.0
- huggingface_hub >= 0.20.0
MIT