Conversation
| The Operator manifest creates all required `Roles`, `ClusterRoles`, and `bindings`. |
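A quick way to sanity-check what the manifest actually created (a sketch; the `localstack` filter is just an assumed naming convention for the generated RBAC objects):

```bash
# List RBAC objects across the cluster and filter for the operator's resources
# (assumes the created Roles/ClusterRoles contain "localstack" in their names)
kubectl get roles,rolebindings,clusterroles,clusterrolebindings -A | grep -i localstack
```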
@HarshCasper wondering about maintaining this table, I think we should automate this. I was thinking of creating a sub-issue for that task; may I assign it to you for next week?
I have a POC in #378 that could probably be adapted.
Deploying localstack-docs with Cloudflare Pages

| Latest commit: | 6315003 |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://6852d11b.localstack-docs.pages.dev |
| Branch Preview URL: | https://docs-new-k8-section.localstack-docs.pages.dev |
FYI @simonrw @mmaureenliu: keep in mind that the current manual markup table we added is not responsive (normal behavior for table markup): https://6852d11b.localstack-docs.pages.dev/aws/enterprise/kubernetes/kubernetes-operator/#permissions. We need to incorporate the same table component we're using in the AWS API Coverage tables, such as this one: cc @HarshCasper
| @@ -0,0 +1,100 @@ |
| --- |
| title: Concepts |
| description: Concepts & Architecture |
Should we add a diagram here similar to the one we have in Notion?
Yes, we can add the one in Notion. I'll add a commit in a bit.
simonrw left a comment
In general a great restructuring of the Kubernetes section, thank you! However, I have a few comments.
| description: Install and run LocalStack on Kubernetes using the official Helm chart. |
| template: doc |
| sidebar: |
|   order: 3 |
Could we make the helm chart 4 and the operator 3 to emphasise the operator more?
| tags: ["Enterprise"] | ||
| --- | ||
|
|
||
| ## Overview |
nit: having the first heading of the Overview section called Overview is a little repetitive. What about skipping this heading?
| LocalStack is a local AWS cloud environment that emulates core AWS services for development and testing. |
| When deployed on Kubernetes, services that typically spawn Docker containers (such as Lambda, ECS, or RDS) instead spawn Kubernetes pods within the same cluster. Behavior is improved by allowing dynamic scaling, isolation, and native Kubernetes orchestration. |
When deployed on Kubernetes
Technically it still requires the user to opt in to this by setting `CONTAINER_RUNTIME=kubernetes`. The Operator does this by default, but it's not guaranteed that the user will do this.
| When deployed on Kubernetes, services that typically spawn Docker containers (such as Lambda, ECS, or RDS) instead spawn Kubernetes pods within the same cluster. Behavior is improved by allowing dynamic scaling, isolation, and native Kubernetes orchestration. |
| When LocalStack is deployed on Kubernetes and Kubernetes support is enabled, services that typically spawn Docker containers (such as Lambda, ECS, or RDS) instead spawn Kubernetes pods within the same cluster. Behavior is improved by allowing dynamic scaling, isolation, and native Kubernetes orchestration. |
I'm not a huge fan of this suggestion though.
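For context, the opt-in itself is just an environment variable; a minimal sketch using the chart's `extraEnvVars` value referenced later in this page (the exact values-file layout is an assumption):

```yaml
# values.yaml (sketch): opt in to spawning compute workloads as Kubernetes pods
extraEnvVars:
  - name: CONTAINER_RUNTIME
    value: "kubernetes"
```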
| Supported cases: |
| - Local Development Environments: Provide isolated, consistent environments for individual developers or small teams. |
I think we are missing a common use case here
| - Local Development Environments: Provide isolated, consistent environments for individual developers or small teams. |
| - Local Development Environments: Provide isolated, consistent environments for individual developers or small teams. |
| - Hosted Development Environments: Provide scalable and isolated development environments for teams. |
| ## Requirements: |
| - K8s Cluster (k3d, minikube) |
| - K8s Cluster (k3d, minikube) |
| - K8s Cluster (such as k3d, minikube, EKS) |
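For readers new to these tools, either of the following spins up a local cluster (standard k3d/minikube commands; the cluster name is arbitrary):

```bash
# Create a local Kubernetes cluster with k3d...
k3d cluster create localstack

# ...or start one with minikube
minikube start --profile localstack
```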
| ::: |
| #### Auth token from a Kubernetes Secret |
| If your auth token is stored in a Kubernetes Secret, you can reference it using `valueFrom`: |
| ```yaml |
| extraEnvVars: |
|   - name: LOCALSTACK_AUTH_TOKEN |
|     valueFrom: |
|       secretKeyRef: |
|         name: <name of the secret> |
|         key: <name of the key in the secret containing the API key> |
| ``` |
I think we need to include this section in the callout, as it refers to the values.yml file. If a user skips the callout, they skip straight to "you can reference it from..." without telling them where to put it.
| ::: |
| #### Auth token from a Kubernetes Secret |
| If your auth token is stored in a Kubernetes Secret, you can reference it using `valueFrom`: |
| ```yaml |
| extraEnvVars: |
|   - name: LOCALSTACK_AUTH_TOKEN |
|     valueFrom: |
|       secretKeyRef: |
|         name: <name of the secret> |
|         key: <name of the key in the secret containing the API key> |
| ``` |
| #### Auth token from a Kubernetes Secret |
| If your auth token is stored in a Kubernetes Secret, you can reference it using `valueFrom`: |
| ```yaml |
| extraEnvVars: |
|   - name: LOCALSTACK_AUTH_TOKEN |
|     valueFrom: |
|       secretKeyRef: |
|         name: <name of the secret> |
|         key: <name of the key in the secret containing the API key> |
| ``` |
| ::: |
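For completeness, creating such a secret could look like this (the secret name `localstack-auth-token` and key `auth-token` are placeholders):

```bash
# Store the auth token in a Kubernetes Secret (placeholder names)
kubectl create secret generic localstack-auth-token \
  --namespace <namespace> \
  --from-literal=auth-token=<your-auth-token>
```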
| ``` |
| :::note |
| Keep the existing **parameters table** in this page (or embed it as a collapsible section). |
Is this line an internal note, or are we keeping it in the docs? If so, I don't understand what it means.
| ### Set Pod resource requests and limits |
| Some environments (notably **EKS on Fargate**) may terminate Pods with low/default resource allocations. Consider setting explicit requests/limits: |
| Some environments (notably **EKS on Fargate**) may terminate Pods with low/default resource allocations. Consider setting explicit requests/limits: |
| Some environments (notably **EKS on Fargate**) may terminate the LocalStack pod if not configured with reasonable requests/limits: |
| memory: 2Gi |
| ``` |
| ### Add env vars and startup scripts |
| ### Add env vars and startup scripts |
| ### Add environment variables and startup scripts |
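As a reference point, a minimal sketch of adding an environment variable via the chart's `extraEnvVars` (startup scripts are typically mounted into LocalStack's init-hook directory `/etc/localstack/init/ready.d/`; the volume wiring is chart-specific and not shown here):

```yaml
# values.yaml (sketch): extra environment variables for the LocalStack container
extraEnvVars:
  - name: DEBUG
    value: "1"
```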
| ```bash |
| kubectl port-forward -n <namespace> <pod-name> 4566 |
| ``` |
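Once the port-forward is running, a quick smoke test against the forwarded port (assumes curl is available; `/_localstack/health` is LocalStack's health endpoint):

```bash
# Should return a JSON document listing service statuses
curl http://localhost:4566/_localstack/health
```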
Maybe we can add a section like this on proxy networking?
Certificate issues when spawning child pods
If you are experiencing an error similar to
localstack.services.lambda_.invocation.assignment.AssignmentException: Could not start new environment: MaxRetryError:MyHTTPSConnectionPool(host='192.168.0.1', port=443): Max retries exceeded with url: /api/v1/namespaces/ns-perf-a39e28bf-c600-498d-9ecc-41419eca1007/pods/lambda-pod-52c280f8dd194dc72bced60e190db6ef/log (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1032)')))
when creating child pods (a Lambda pod in the example above), then your proxy settings may be getting applied to cluster-internal communication.
If you are using HTTP_PROXY or HTTPS_PROXY environment variables to configure a TLS terminating proxy server (for example in corporate environments), then you may need to add the Kubernetes API server IP address to the NO_PROXY environment variable. With the example above, add NO_PROXY=192.168.0.1 to your pod environment variables.
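To make that concrete, a sketch of the resulting pod environment via `extraEnvVars` (the proxy address is illustrative; the `192.168.0.1` value mirrors the API server IP from the error above):

```yaml
extraEnvVars:
  - name: HTTP_PROXY
    value: "http://proxy.internal.example:3128"   # illustrative corporate proxy
  - name: HTTPS_PROXY
    value: "http://proxy.internal.example:3128"
  - name: NO_PROXY
    value: "192.168.0.1"   # exclude the Kubernetes API server from proxying
```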

Preview URL w/ new k8 docs:
https://6852d11b.localstack-docs.pages.dev/aws/enterprise/kubernetes/
Resolves: