
Add ResourceConfig for zero-config resource provisioning and Pythonic client wrappers#5

Draft
Copilot wants to merge 2 commits into copilot/add-cloud-resource-provisioning from copilot/add-resource-config-export

Conversation

Contributor

Copilot AI commented Feb 25, 2026

Completes the developer experience by enabling stack() with zero arguments to provision a working Postgres + Redis stack, and by providing Pythonic client wrappers for all resource types.

Changes

fastops/connect.py (new, 358 lines)

  • ResourceConfig class for config management:
    • from_env() / load() / save() - JSON persistence
    • to_env() / to_dotenv() - export as dict or .env file
    • connect(resource_name) - returns ready-to-use Python client
  • Auto-detection: _detect_resource_groups() infers resource types from env vars
  • Connector functions with lazy imports, clear ImportError messages:
    • Database (fastsql/sqlalchemy), Mongo, Redis, Storage (fsspec), Queue, Search, LLM (lisette/openai)

fastops/resources.py

  • DEFAULTS = {'db': database, 'cache': cache} - used by stack() when resources=None
  • bucket(name='data') - name was required, now optional
  • llm(provider='docker') - auto-switches gpt-4o → llama3.2 for local dev
  • Healthchecks on all Docker services (pg_isready, redis-cli ping, etc.)
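The healthcheck entries above could look roughly like the following compose-style dict; the exact keys, intervals, and the `pg_healthcheck` helper name are illustrative assumptions, not the actual fastops API:

```python
def pg_healthcheck(user='postgres'):
    "Compose-style healthcheck block using pg_isready (illustrative defaults)."
    return {
        'test': ['CMD-SHELL', f'pg_isready -U {user}'],  # command Docker runs
        'interval': '5s',   # time between checks
        'timeout': '3s',    # per-check deadline
        'retries': 5,       # failures before 'unhealthy'
    }

print(pg_healthcheck()['test'][1])  # pg_isready -U postgres
```

An equivalent block with `redis-cli ping` would cover the cache service.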

pyproject.toml

  • Optional dependency groups: db, storage, llm, cache, queue, search, azure, aws, all
  • Keywords: added deployment, infrastructure, cloud, resources

fastops/__init__.py

  • Export ResourceConfig at top level

Usage

from fastops import stack, ResourceConfig

# Zero-config: get Postgres + Redis
env, compose, volumes = stack()

# Manage config
config = ResourceConfig.from_env(env)
config.save('resources.json')           # persist
config.to_dotenv('.env')                # export for docker
db = config.connect('db')               # get fastsql/sqlalchemy client

# Custom stack still works
stack({
    'db': lambda: database(engine='mysql'),
    'storage': lambda: bucket(name='files')
})

All resource functions (database(), cache(), queue(), bucket(), llm(), search()) now work with zero arguments.

Original prompt

Overview

This is the P0 PR that completes the developer experience for fastops resources. It adds three things:

  1. fastops/connect.py — ResourceConfig class for config export + Pythonic client wrappers
  2. pyproject.toml updates — nixpacks as core dep, optional extras for client libraries
  3. Practical defaults across all functions — so beginners can call database() with zero args and get a working Postgres

Branch off copilot/add-cloud-resource-provisioning which already has resources.py, ship.py, etc.


File 1: fastops/connect.py

Module docstring

"""Resource config export and Pythonic client wrappers. Turns env dicts into saveable configs and ready-to-use Python clients."""

__all__

['ResourceConfig']

Class: ResourceConfig

A config object holding all resource connection details. Created from stack() output or loaded from a saved config file. Provides .connect() to get ready-to-use Python client objects.

Constructor

__init__(self, resources=None) — sets self._resources = dict(resources or {})

Class methods

ResourceConfig.from_env(cls, env_dict)
Build config from the merged env dict returned by stack(). Auto-detect resource types from env var patterns using _detect_resource_groups(env_dict). Returns a ResourceConfig instance.

ResourceConfig.load(cls, path='resources.json')
Load config from JSON file. return cls(json.loads(Path(path).read_text()))

Instance methods

.save(self, path='resources.json')
Save config to JSON file. Path(path).write_text(json.dumps(self._resources, indent=2)). Return path.

.to_env(self)
Flatten back to a dict of env vars (skip keys starting with _). Returns a flat {key: value} dict.

.to_dotenv(self, path='.env')
Write a .env file. Each line is KEY=VALUE. Return path.
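The two export methods can be sketched together; this is a minimal stand-in for the real class, assuming the group layout described above:

```python
from pathlib import Path

class ResourceConfig:
    "Minimal sketch of the export methods; the real class has more."
    def __init__(self, resources=None):
        self._resources = dict(resources or {})

    def to_env(self):
        # Flatten groups into {KEY: value}, skipping private keys like '_type'
        return {k: v for group in self._resources.values()
                for k, v in group.items() if not k.startswith('_')}

    def to_dotenv(self, path='.env'):
        # One KEY=VALUE line per env var
        Path(path).write_text(''.join(f'{k}={v}\n'
                                      for k, v in self.to_env().items()))
        return path

cfg = ResourceConfig({'db': {'_type': 'postgres',
                             'DATABASE_URL': 'postgres://localhost/app'}})
print(cfg.to_env())  # {'DATABASE_URL': 'postgres://localhost/app'}
```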

.connect(self, resource_name)
Return a ready-to-use Python client for the named resource. Look up group = self._resources[resource_name], check group['_type'], and dispatch to the appropriate _connect_* function:

| `_type` value | Connector function |
| --- | --- |
| postgres, mysql, sqlite | _connect_database(group) |
| mongo | _connect_mongo(group) |
| redis | _connect_redis(group) |
| minio, s3, azure_blob, gcs | _connect_storage(group) |
| rabbitmq, sqs, servicebus, pubsub | _connect_queue(group) |
| elasticsearch, opensearch, azure_search | _connect_search(group) |
| openai, azure_openai, ollama, bedrock | _connect_llm(group) |

If type unknown, raise ValueError with helpful message listing available resources.
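The dispatch plus the error case can be sketched as follows; the `_connect_*` stubs stand in for the real connector functions, and only a subset of types is shown:

```python
def _connect_database(group): return ('database-client', group)  # stub
def _connect_redis(group): return ('redis-client', group)        # stub

_DISPATCH = {
    'postgres': _connect_database, 'mysql': _connect_database,
    'sqlite': _connect_database, 'redis': _connect_redis,
}

def connect(resources, name):
    "Route a resource group to its connector by '_type'."
    group = resources[name]
    fn = _DISPATCH.get(group.get('_type'))
    if fn is None:
        raise ValueError(f"Unknown resource type {group.get('_type')!r} "
                         f"for {name!r}; available: {sorted(resources)}")
    return fn(group)

res = {'db': {'_type': 'postgres', 'DATABASE_URL': 'postgres://localhost/app'}}
print(connect(res, 'db')[0])  # database-client
```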

__getitem__(self, key) — return self._resources[key] (raw config dict)

__contains__(self, key) — return key in self._resources

__repr__(self) — ResourceConfig(db(postgres), cache(redis), ...) format

.names property — list(self._resources.keys())

Connector functions (module-level, private)

All connectors use lazy imports with clear ImportError messages telling users exactly what to pip install.

_connect_database(group)
Try from fastsql import database; return database(group['DATABASE_URL']).
Fallback: import sqlalchemy; return sqlalchemy.create_engine(url).connect().
ImportError message: 'Install fastsql (pip install fastsql) or sqlalchemy to connect to databases.'

_connect_mongo(group)
from pymongo import MongoClient; return MongoClient(group['DATABASE_URL'])

_connect_redis(group)
import redis; return redis.Redis.from_url(group['REDIS_URL'])

_connect_storage(group)
Use fsspec for ALL storage backends:

  • minio/docker: fsspec.filesystem('s3', key=..., secret=..., client_kwargs={'endpoint_url': ...})
  • aws/s3: fsspec.filesystem('s3') (default AWS creds)
  • azure/azure_blob: fsspec.filesystem('abfs', connection_string=...)
  • gcp/gcs: fsspec.filesystem('gcs') (default GCP creds)

_connect_queue(group)

  • rabbitmq/docker: import pika; return pika.BlockingConnection(pika.URLParameters(url)).channel()
  • sqs: import boto3; return boto3.client('sqs')
  • servicebus: from azure.servicebus import ServiceBusClient; return ServiceBusClient.from_connection_string(url)
  • pubsub: from google.cloud import pubsub_v1; return pubsub_v1.PublisherClient()

_connect_search(group)

  • elasticsearch/docker: from elasticsearch import Elasticsearch; return Elasticsearch(url)
  • opensearch: from opensearchpy import OpenSearch; return OpenSearch(hosts=[url])
  • azure_search: from azure.search.documents import SearchClient with AzureKeyCredential

_connect_llm(group)
Use lisette (AnswerDotAI's litellm wrapper) as primary, with openai as fallback:

  • For openai, azure_openai, ollama providers:
    • Try: from lisette import Chat; return Chat(group.get('LLM_MODEL', 'gpt-4o')) — lisette handles all providers via litellm
    • Fallback to raw openai: import openai then:
      • `opena...

This pull request was created from Copilot chat.



…lts and healthchecks

Co-authored-by: Karthik777 <7102951+Karthik777@users.noreply.github.com>
Copilot AI changed the title [WIP] Add resource config export and Pythonic client wrappers Add ResourceConfig for zero-config resource provisioning and Pythonic client wrappers Feb 25, 2026
Copilot AI requested a review from Karthik777 February 25, 2026 04:34