- Python 3.10+
- Docker (for running models backend)
- pip
```bash
git clone <repository-url>
cd AI-Testing-Platform

# Create virtual environment
python -m venv myenv
source myenv/bin/activate  # On Windows, run: myenv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

GPU version:

```bash
docker-compose -f docker/ollama/docker-compose.yml up -d
```

CPU-only version:

```bash
docker-compose -f docker/ollama/docker-compose-cpu.yml up -d
```

After Ollama is running, pull the models you want to test:

```bash
# Pull models for testing and for evaluation
docker exec -it avise-ollama ollama pull <model_name>
```

Edit `src/configs/model.json`:
```json
{
  "testable_model": "X",
  "evaluation_model": "Y",
  "api_url": "http://localhost:11434"
}
```

Note: JSON does not allow comments; `http://localhost:11434` is Ollama's default address.

Run a test with:

```bash
python -m src.runner -test <test_name> -modelconf <path> -testconf <path> [options]
```

| Argument | Description |
|---|---|
| `-test` | Test to run (e.g., `prompt_injection`, `context_test`) |
| `-modelconf` | Path to model configuration JSON |
| `-testconf` | Path to test configuration JSON |
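Because JSON rejects inline comments, a typo in `model.json` only surfaces at run time. A minimal sanity-check sketch can catch a missing key earlier; the `validate_model_config` helper below is illustrative and not part of the platform:

```python
import json

# Keys the platform's model config is expected to define
REQUIRED_KEYS = {"testable_model", "evaluation_model", "api_url"}

def validate_model_config(path):
    """Load a model config file and verify the required keys are present."""
    with open(path) as f:
        cfg = json.load(f)
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise KeyError(f"model config missing keys: {sorted(missing)}")
    return cfg

# Example: write a sample config, then validate it.
sample = {
    "testable_model": "X",
    "evaluation_model": "Y",
    "api_url": "http://localhost:11434",
}
with open("model.json", "w") as f:
    json.dump(sample, f)

cfg = validate_model_config("model.json")
print(cfg["api_url"])  # → http://localhost:11434
```

Running the check before a long test batch costs nothing and fails fast on a malformed config.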
| Argument | Description |
|---|---|
| `-format` | Report format: `json`, `html`, or `md` |
| `-output` | Custom output file path |
| `-reports-dir` | Base directory for reports (default: `reports/`) |
| `-apikey` | API key for authenticated APIs |
| `-list` | List available tests and formats |
| `-v` | Enable verbose logging |
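Putting the required arguments and options together, a full invocation might look like the sketch below. The test name `prompt_injection` comes from the examples above; `src/configs/test.json` is an assumed path — substitute your own test configuration:

```shell
# Run the prompt_injection test with verbose logging and an HTML report
# (src/configs/test.json is a placeholder path)
python -m src.runner -test prompt_injection \
  -modelconf src/configs/model.json \
  -testconf src/configs/test.json \
  -format html -v
```

Use `-list` first if you are unsure which test names and report formats your installation supports.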