seata-k8s is a Kubernetes operator for deploying and managing Apache Seata distributed transaction servers. It provides a streamlined way to deploy Seata Server clusters on Kubernetes, with automatic scaling and built-in persistence management.
- 🚀 Easy Deployment: Deploy Seata Server clusters using Kubernetes CRDs
- 📈 Auto Scaling: Simple scaling through replica configuration (see the sketch after this list)
- 💾 Persistence Management: Built-in support for persistent volumes
- 🔐 RBAC Support: Comprehensive role-based access control
- 🛠️ Developer Friendly: Includes debugging and development tools
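As a hedged illustration of the Auto Scaling bullet above: scaling out is just a matter of changing `spec.replicas` on the custom resource. The resource name `seata-server` and the fully qualified CRD plural name are assumptions derived from the examples later in this README:

```bash
# Scale an existing SeataServer to 5 replicas by patching spec.replicas.
# The plural resource name "seataservers.operator.seata.apache.org" is an
# assumption based on the CRD group used in the examples below.
kubectl patch seataservers.operator.seata.apache.org seata-server \
  --type merge -p '{"spec":{"replicas":5}}'
```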
- Apache Seata - Distributed transaction framework
- Seata Samples - Example implementations
- Seata Docker - Docker image repository
- Kubernetes 1.16+ cluster
- kubectl configured with access to your cluster
- Make and Docker (for building images)
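Before starting, you can sanity-check these prerequisites from your workstation, assuming `kubectl`, `docker`, and `make` are on your PATH:

```bash
kubectl version        # Server version should report 1.16 or later
kubectl cluster-info   # Confirms kubectl can reach your cluster
docker version
make --version
```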
To deploy Seata Server using the Operator method, follow these steps:
```bash
git clone https://github.com/apache/incubator-seata-k8s.git
cd incubator-seata-k8s
```

Deploy the controller, CRD, RBAC, and other required resources:
```bash
make deploy
```

Verify the deployment:
```bash
kubectl get deployment -n seata-k8s-controller-manager
kubectl get pods -n seata-k8s-controller-manager
```

Create a SeataServer resource. Here's an example based on seata-server-cluster.yaml:
```yaml
apiVersion: operator.seata.apache.org/v1alpha1
kind: SeataServer
metadata:
  name: seata-server
  namespace: default
spec:
  serviceName: seata-server-cluster
  replicas: 3
  image: apache/seata-server:latest
  persistence:
    volumeReclaimPolicy: Retain
    spec:
      resources:
        requests:
          storage: 5Gi
```

Apply it to your cluster:
```bash
kubectl apply -f seata-server.yaml
```

If everything is working correctly, the operator will:
- Create 3 StatefulSet replicas
- Create a Headless Service named `seata-server-cluster`
- Set up persistent volumes
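To confirm the operator created these resources, a quick check; the StatefulSet sharing the SeataServer resource's name (`seata-server`) is an assumption based on the naming in the example above:

```bash
kubectl get statefulset seata-server       # Expect READY 3/3
kubectl get service seata-server-cluster   # Headless: CLUSTER-IP shows "None"
kubectl get pvc                            # One PersistentVolumeClaim per replica
kubectl get pods                           # seata-server-0 ... seata-server-2
```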
Access the Seata Server cluster within your Kubernetes network:
```
seata-server-0.seata-server-cluster.default.svc
seata-server-1.seata-server-cluster.default.svc
seata-server-2.seata-server-cluster.default.svc
```
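To verify those in-cluster DNS names resolve, one option is a throwaway pod; the `busybox` image here is just an illustrative choice:

```bash
# Run nslookup from inside the cluster, then delete the pod on exit
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup seata-server-0.seata-server-cluster.default.svc
```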
For complete CRD definitions, see seataservers_crd.yaml.

| Property | Description | Default | Example |
|---|---|---|---|
| `serviceName` | Name of the Headless Service | - | `seata-server-cluster` |
| `replicas` | Number of Seata Server replicas | 1 | 3 |
| `image` | Seata Server container image | - | `apache/seata-server:latest` |
| `ports.consolePort` | Console port | 7091 | 7091 |
| `ports.servicePort` | Service port | 8091 | 8091 |
| `ports.raftPort` | Raft consensus port | 9091 | 9091 |
| `resources` | Container resource requests/limits | - | See example below |
| `persistence.volumeReclaimPolicy` | Volume reclaim policy | `Retain` | `Retain` or `Delete` |
| `persistence.spec.resources.requests.storage` | Persistent volume size | - | `5Gi` |
| `env` | Environment variables | - | See example below |
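The `resources` row above refers to an example; here is a minimal sketch, assuming the field follows the standard Kubernetes ResourceRequirements shape (the CPU and memory values are illustrative only):

```yaml
apiVersion: operator.seata.apache.org/v1alpha1
kind: SeataServer
metadata:
  name: seata-server
spec:
  image: apache/seata-server:latest
  replicas: 3
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi
```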
Configure Seata Server settings using environment variables and Kubernetes Secrets:
```yaml
apiVersion: operator.seata.apache.org/v1alpha1
kind: SeataServer
metadata:
  name: seata-server
  namespace: default
spec:
  image: apache/seata-server:latest
  replicas: 1
  persistence:
    spec:
      resources:
        requests:
          storage: 5Gi
  env:
    - name: console.user.username
      value: seata
    - name: console.user.password
      valueFrom:
        secretKeyRef:
          name: seata-credentials
          key: password
---
apiVersion: v1
kind: Secret
metadata:
  name: seata-credentials
  namespace: default
type: Opaque
stringData:
  password: your-secure-password
```
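After applying the manifest above, you can confirm the Secret exists and decodes as expected:

```bash
kubectl get secret seata-credentials -n default
# Decode the stored password (prints it to the terminal)
kubectl get secret seata-credentials -o jsonpath='{.data.password}' | base64 -d
```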
To debug and develop this operator locally, we recommend using Minikube or a similar local Kubernetes environment.

Modify the code and rebuild the controller image:
```bash
# Start minikube and set docker environment
minikube start
eval $(minikube docker-env)

# Build and deploy
make docker-build deploy

# Verify deployment
kubectl get deployment -n seata-k8s-controller-manager
```

Use Telepresence to debug locally without building container images.
Prerequisites:
- Install Telepresence CLI
- Install Traffic Manager
Steps:
- Connect Telepresence to your cluster:
```bash
telepresence connect
telepresence status   # Verify connection
```

- Generate manifests and code:

```bash
make manifests generate fmt vet
```

- Run the controller locally using your IDE or command line:

```bash
go run .
```

Now your local development environment has access to the Kubernetes cluster's DNS and services.
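A quick way to confirm the Telepresence connection works: in-cluster DNS names should now resolve from your workstation. The hostname and console port (7091, per the property table above) assume the cluster example from earlier:

```bash
# Both commands run on your local machine while Telepresence is connected
nslookup seata-server-0.seata-server-cluster.default.svc
curl -s http://seata-server-0.seata-server-cluster.default.svc:7091/
```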
This method deploys Seata Server directly using Kubernetes manifests without the operator. Note that Seata Docker images currently require link-mode for container communication.
- MySQL database
- Nacos registry server
- Access to Kubernetes cluster
Deploy Seata server, Nacos, and MySQL:
```bash
kubectl apply -f deploy/seata-deploy.yaml
kubectl apply -f deploy/seata-service.yaml

kubectl get service   # Note the NodePort IPs and ports for Seata and Nacos
```
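To read the assigned NodePorts without scanning the full table output, you can query them directly; the service names here are assumptions, so substitute the names that `kubectl get service` actually shows:

```bash
# Print the first NodePort of the (assumed) nacos service
kubectl get service nacos -o jsonpath='{.spec.ports[0].nodePort}'
# NodePorts are reachable on any node's address
kubectl get nodes -o wide
```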
Update example/example-deploy.yaml with the NodePort IP addresses obtained above.

```bash
# Connect to MySQL and import the Seata table schema
# Replace <CLUSTER_IP> with your MySQL service IP
mysql -h <CLUSTER_IP> -u root -p < path/to/seata-db-schema.sql
```
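If you need the `<CLUSTER_IP>` for MySQL, it can be read from its Service; the name `mysql` is an assumption, so check `kubectl get service` for the real one:

```bash
kubectl get service mysql -o jsonpath='{.spec.clusterIP}'
```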
Deploy the sample microservices:

```bash
# Deploy account and storage services
kubectl apply -f example/example-deploy.yaml
kubectl apply -f example/example-service.yaml

# Deploy order service
kubectl apply -f example/order-deploy.yaml
kubectl apply -f example/order-service.yaml

# Deploy business service
kubectl apply -f example/business-deploy.yaml
kubectl apply -f example/business-service.yaml
```

Open the Nacos console to verify service registration:
http://localhost:8848/nacos/

Check that all services are registered (if the console is not reachable at this address, see the port-forward sketch after this list):
- account-service
- storage-service
- order-service
- business-service
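If the console is not reachable at localhost:8848 directly, a port-forward is one way to get there; the Service name `nacos` is an assumption, so substitute the name shown by `kubectl get service`:

```bash
# Forward local port 8848 to the (assumed) nacos Service
kubectl port-forward service/nacos 8848:8848
```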
Test the distributed transaction scenarios using the following curl commands:
curl -H "Content-Type: application/json" \ -X POST \ --data '{"id":1,"userId":"1","amount":100}' \ http://<CLUSTER_IP>:8102/account/dec_accountcurl -H "Content-Type: application/json" \ -X POST \ --data '{"commodityCode":"C201901140001","count":100}' \ http://<CLUSTER_IP>:8100/storage/dec_storagecurl -H "Content-Type: application/json" \ -X POST \ --data '{"userId":"1","commodityCode":"C201901140001","orderCount":10,"orderAmount":100}' \ http://<CLUSTER_IP>:8101/order/create_ordercurl -H "Content-Type: application/json" \ -X POST \ --data '{"userId":"1","commodityCode":"C201901140001","count":10,"amount":100}' \ http://<CLUSTER_IP>:8104/business/dubbo/buyReplace <CLUSTER_IP> with the actual NodePort IP address of your service.