Add ResourceConfig for zero-config resource provisioning and Pythonic client wrappers#5
Draft
Copilot wants to merge 2 commits into `copilot/add-cloud-resource-provisioning` from
…lts and healthchecks
Co-authored-by: Karthik777 <7102951+Karthik777@users.noreply.github.com>
Copilot changed the title from [WIP] Add resource config export and Pythonic client wrappers to Add ResourceConfig for zero-config resource provisioning and Pythonic client wrappers on Feb 25, 2026.
Completes the developer experience by enabling `stack()` with zero arguments to provision a working Postgres + Redis, and by providing Pythonic client wrappers for all resource types.

### Changes
- **`fastops/connect.py`** (new, 358 lines) — `ResourceConfig` class for config management:
  - `from_env()` / `load()` / `save()` — JSON persistence
  - `to_env()` / `to_dotenv()` — export as a dict or a `.env` file
  - `connect(resource_name)` — returns a ready-to-use Python client
  - `_detect_resource_groups()` — infers resource types from env vars
  - lazy imports with clear `ImportError` messages
- **`fastops/resources.py`**:
  - `DEFAULTS = {'db': database, 'cache': cache}` — used by `stack()` when `resources=None`
  - `bucket(name='data')` — `name` was required, now optional
  - `llm(provider='docker')` — auto-switches `gpt-4o` → `llama3.2` for local dev
- **`pyproject.toml`** — optional extras `db`, `storage`, `llm`, `cache`, `queue`, `search`, `azure`, `aws`, `all`; keywords `deployment`, `infrastructure`, `cloud`, `resources`
- **`fastops/__init__.py`** — exports `ResourceConfig` at top level
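The `DEFAULTS` fallback described above might look like the following sketch. The `database()` and `cache()` stubs stand in for the real provisioning functions (which are not shown in this PR excerpt), so the returned values are illustrative only:

```python
# Sketch of the DEFAULTS fallback in resources.py. The real database() and
# cache() provision containers or cloud resources; stubs are used here so
# the control flow is runnable.
def database():
    return {'DATABASE_URL': 'postgresql://localhost:5432/app', '_type': 'postgres'}

def cache():
    return {'REDIS_URL': 'redis://localhost:6379/0', '_type': 'redis'}

# stack() falls back to DEFAULTS when the caller passes no resources.
DEFAULTS = {'db': database, 'cache': cache}

def stack(resources=None):
    resources = resources or DEFAULTS
    # Call each factory and collect the resulting config dicts by name.
    return {name: factory() for name, factory in resources.items()}
```

With this shape, `stack()` with no arguments yields a Postgres + Redis pair, while an explicit `resources` dict overrides the defaults entirely.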
### Usage

All resource functions (`database()`, `cache()`, `queue()`, `bucket()`, `llm()`, `search()`) now work with zero arguments.

### Original prompt
#### Overview
This is the P0 PR that completes the developer experience for fastops resources. It adds three things:
1. `fastops/connect.py` — `ResourceConfig` class for config export + Pythonic client wrappers
2. `pyproject.toml` updates — nixpacks as a core dep, optional extras for client libraries
3. Zero-config defaults — `database()` with zero args gets you a working Postgres

Branch off `copilot/add-cloud-resource-provisioning`, which already has `resources.py`, `ship.py`, etc.

#### File 1: `fastops/connect.py`

Module docstring:

```python
"""Resource config export and Pythonic client wrappers. Turns env dicts into saveable configs and ready-to-use Python clients."""
```

`__all__ = ['ResourceConfig']`
#### Class: `ResourceConfig`

A config object holding all resource connection details. Created from `stack()` output or loaded from a saved config file. Provides `.connect()` to get ready-to-use Python client objects.

**Constructor**

`__init__(self, resources=None)` — `self._resources = dict(resources or {})`
**Class methods**

`ResourceConfig.from_env(cls, env_dict)` — build a config from the merged env dict returned by `stack()`. Auto-detect resource types from env var patterns using `_detect_resource_groups(env_dict)`. Returns a `ResourceConfig` instance.

`ResourceConfig.load(cls, path='resources.json')` — load config from a JSON file: `return cls(json.loads(Path(path).read_text()))`
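A minimal sketch of the class-method side of `ResourceConfig`. The spec above doesn't show the actual env-var patterns, so the grouping rules and scheme→type mapping here are illustrative assumptions:

```python
import json
from pathlib import Path
from urllib.parse import urlparse

# Illustrative URL-scheme -> resource-type mapping; the real detection
# rules are not shown in this PR excerpt.
_SCHEME_TYPES = {'postgresql': 'postgres', 'postgres': 'postgres',
                 'mysql': 'mysql', 'sqlite': 'sqlite',
                 'mongodb': 'mongo', 'redis': 'redis', 'amqp': 'rabbitmq'}

def _detect_resource_groups(env_dict):
    """Group env vars by resource name and infer each group's _type."""
    groups = {}
    for key, value in env_dict.items():
        # Assumed naming: well-known keys map to fixed groups, everything
        # else is grouped by its first underscore-separated token.
        if key == 'DATABASE_URL':
            name = 'db'
        elif key == 'REDIS_URL':
            name = 'cache'
        else:
            name = key.split('_')[0].lower()
        group = groups.setdefault(name, {})
        group[key] = value
        scheme = urlparse(value).scheme
        if scheme in _SCHEME_TYPES:
            group['_type'] = _SCHEME_TYPES[scheme]
    return groups

class ResourceConfig:
    def __init__(self, resources=None):
        self._resources = dict(resources or {})

    @classmethod
    def from_env(cls, env_dict):
        return cls(_detect_resource_groups(env_dict))

    @classmethod
    def load(cls, path='resources.json'):
        return cls(json.loads(Path(path).read_text()))
```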
**Instance methods**

`.save(self, path='resources.json')` — save config to a JSON file: `Path(path).write_text(json.dumps(self._resources, indent=2))`. Return `path`.

`.to_env(self)` — flatten back to a dict of env vars (skip keys starting with `_`). Returns a flat `{key: value}` dict.

`.to_dotenv(self, path='.env')` — write a `.env` file, one `KEY=VALUE` per line. Return `path`.

`.connect(self, resource_name)` — return a ready-to-use Python client for the named resource. Look up `group = self._resources[resource_name]`, check `group['_type']`, and dispatch to the appropriate `_connect_*` function:

| `_type` value | connector |
| --- | --- |
| `postgres`, `mysql`, `sqlite` | `_connect_database(group)` |
| `mongo` | `_connect_mongo(group)` |
| `redis` | `_connect_redis(group)` |
| `minio`, `s3`, `azure_blob`, `gcs` | `_connect_storage(group)` |
| `rabbitmq`, `sqs`, `servicebus`, `pubsub` | `_connect_queue(group)` |
| `elasticsearch`, `opensearch`, `azure_search` | `_connect_search(group)` |
| `openai`, `azure_openai`, `ollama`, `bedrock` | `_connect_llm(group)` |

If the type is unknown, raise `ValueError` with a helpful message listing the available resources.

`__getitem__(self, key)` — return `self._resources[key]` (the raw config dict)

`__contains__(self, key)` — return `key in self._resources`

`__repr__(self)` — `ResourceConfig(db(postgres), cache(redis), ...)` format

`.names` property — `list(self._resources.keys())`
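The instance-method side could be sketched as below. The two stub connectors stand in for the real lazy-importing `_connect_*` functions, and only an abbreviated dispatch table is shown:

```python
import json
from pathlib import Path

# Stub connectors so the dispatch is runnable here; the real _connect_*
# functions lazily import client libraries.
def _connect_database(group): return ('db-client', group['DATABASE_URL'])
def _connect_redis(group):    return ('redis-client', group['REDIS_URL'])

# _type value -> connector, per the dispatch table above (abbreviated).
_CONNECTORS = {'postgres': _connect_database, 'mysql': _connect_database,
               'sqlite': _connect_database, 'redis': _connect_redis}

class ResourceConfig:
    def __init__(self, resources=None):
        self._resources = dict(resources or {})

    def save(self, path='resources.json'):
        Path(path).write_text(json.dumps(self._resources, indent=2))
        return path

    def to_env(self):
        # Flatten back to env vars, skipping private keys like _type.
        return {k: v for group in self._resources.values()
                for k, v in group.items() if not k.startswith('_')}

    def to_dotenv(self, path='.env'):
        Path(path).write_text(''.join(f'{k}={v}\n'
                                      for k, v in self.to_env().items()))
        return path

    def connect(self, resource_name):
        group = self._resources[resource_name]
        try:
            return _CONNECTORS[group['_type']](group)
        except KeyError:
            raise ValueError(f"Unknown resource type {group.get('_type')!r}; "
                             f'available resources: {self.names}') from None

    def __getitem__(self, key): return self._resources[key]
    def __contains__(self, key): return key in self._resources

    def __repr__(self):
        inner = ', '.join(f"{name}({g.get('_type', '?')})"
                          for name, g in self._resources.items())
        return f'ResourceConfig({inner})'

    @property
    def names(self): return list(self._resources.keys())
```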
#### Connector functions (module-level, private)

All connectors use lazy imports with clear `ImportError` messages telling users exactly what to `pip install`.

`_connect_database(group)` — try `from fastsql import database; return database(group['DATABASE_URL'])`. Fallback: `import sqlalchemy; return sqlalchemy.create_engine(url).connect()`. `ImportError` message: `'Install fastsql (pip install fastsql) or sqlalchemy to connect to databases.'`

`_connect_mongo(group)` — `from pymongo import MongoClient; return MongoClient(group['DATABASE_URL'])`

`_connect_redis(group)` — `import redis; return redis.Redis.from_url(group['REDIS_URL'])`
`_connect_storage(group)` — use `fsspec` for ALL storage backends:

- `minio`/`docker`: `fsspec.filesystem('s3', key=..., secret=..., client_kwargs={'endpoint_url': ...})`
- `aws`/`s3`: `fsspec.filesystem('s3')` (default AWS creds)
- `azure`/`azure_blob`: `fsspec.filesystem('abfs', connection_string=...)`
- `gcp`/`gcs`: `fsspec.filesystem('gcs')` (default GCP creds)

`_connect_queue(group)`:

- `rabbitmq`/`docker`: `import pika; return pika.BlockingConnection(pika.URLParameters(url)).channel()`
- `sqs`: `import boto3; return boto3.client('sqs')`
- `servicebus`: `from azure.servicebus import ServiceBusClient; return ServiceBusClient.from_connection_string(url)`
- `pubsub`: `from google.cloud import pubsub_v1; return pubsub_v1.PublisherClient()`

`_connect_search(group)`:

- `elasticsearch`/`docker`: `from elasticsearch import Elasticsearch; return Elasticsearch(url)`
- `opensearch`: `from opensearchpy import OpenSearch; return OpenSearch(hosts=[url])`
- `azure_search`: `from azure.search.documents import SearchClient` with `AzureKeyCredential`

`_connect_llm(group)` — use lisette (AnswerDotAI's litellm wrapper) as primary, with `openai` as fallback:

- `openai`, `azure_openai`, `ollama` providers: `from lisette import Chat; return Chat(group.get('LLM_MODEL', 'gpt-4o'))` — lisette handles all providers via litellm
- fallback: `import openai` then: …

This pull request was created from Copilot chat.