Intro
In the last two weeks, I have been trying to find a better way to store secrets securely. As you know, k8s (Kubernetes) is an open-source system for automating deployment, scaling, and management of containerized applications, and it is part of the CNCF. It is necessary to put some energy into its security design; in fact, k8s's default defenses are not good enough. For example, k8s only encodes Secrets as base64, and in the early days etcd's security was not taken seriously. Coming back to k8s secret management: after a period of research, I found several different ways to build a secret manager solution, and after a round of selection we chose the following four types:
- vault on k8s (Already experimented)
- vault with cert-manager on k8s (Already experimented)
- aws secret manager on k8s (No experiment)
- kubeseal on k8s (Already experimented)
The experiments ran on a k8s cluster in Alibaba Cloud with 3 masters and 4 slaves.
Some background knowledge
k8s & secret
We will avoid going too deep into it, because this blog should stay focused on secret solutions.
Architecture & WorkFlow
First, use the picture below to get an overview of the k8s architecture.
Classically, k8s is deployed in master/slave mode. Every node can host different pods, but a pod cannot be spread across different nodes, and every container should run only a single process. On every node there are three parts: kube-proxy, kubelet, and the container runtime, which most of the time is Docker.
K8s is controlled entirely over a REST API.
As for a Secret, it can be a token, a DB password, an HTTPS certificate, and so on. At minimum, a secret manager needs these functions:
- Key Store
- Key Rotation
- Key Sharing
- Seal/UnSeal
- Authentication/Authorization
When you build an application on k8s, configuration needs to be divided into two parts: plaintext goes into a k8s ConfigMap, and sensitive values go into a Secret; a minimal sketch of the split is shown below. After that, we look at the workflow.
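To make the split concrete, here is a minimal hedged sketch using kubectl; the names app-config and app-secret and all the literal values are hypothetical:

```sh
# Plaintext settings go into a ConfigMap (all names here are hypothetical).
kubectl create configmap app-config \
  --from-literal=LOG_LEVEL=debug \
  --from-literal=API_ENDPOINT=https://api.example.com

# Sensitive values go into a Secret; remember k8s only base64-encodes them.
kubectl create secret generic app-secret \
  --from-literal=DB_PASSWORD='s3cr3t'

# Proof that the Secret is only encoded, not encrypted:
kubectl get secret app-secret -o jsonpath='{.data.DB_PASSWORD}' | base64 --decode
```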
Controllers watch the state of the pods, and every change is rolled out through these parts:
Master node: Deployment controller -> ReplicaSet controller -> Scheduler assigns the pod to a node
Slave node: kubelet tells Docker to run the container
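You can watch this chain for yourself with a throwaway deployment; nginx-demo is just a hypothetical name:

```sh
# The Deployment controller creates a ReplicaSet, the ReplicaSet creates
# pods, and the scheduler assigns each pod to a node.
kubectl create deployment nginx-demo --image=nginx
kubectl get deployment,replicaset,pods -o wide
```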
Access
- Normal User
K8s supports access from outside in three ways (ExternalName is a special case); a small example follows the list:
- ClusterIP (on an IaaS, the cloud can provide an external or elastic IP)
- LoadBalancer (on an IaaS, the cloud provides a load balancer as a cloud product)
- NodePort (the service does not need a single external IP; you can access it with IP:PORT)
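For example, a hedged NodePort sketch (reusing the hypothetical nginx-demo deployment from above):

```sh
# Expose the deployment on a NodePort; k8s assigns a port in 30000-32767.
kubectl expose deployment nginx-demo --type=NodePort --port=80
# Look up the assigned port, then reach the service at <node-ip>:<node-port>.
kubectl get service nginx-demo
```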
Also, you should know about route maps, APM, log collection, and so on.
- Ops
When you access it from inside, just use kubectl proxy and its subcommands.
kubectl proxy lets you view the k8s dashboard. When you try to access http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login, it needs a token, which you can get with:

```sh
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
```
`kubectl port-forward <pod> 8080:8080` allows you to view an internal service from your local computer. Also pay attention to the namespace (default or not? current context?) and the service. For example: `kubectl port-forward vault-xxxxxx-xxxxxx 8200` to view the Vault dashboard.
Peripheral knowledge
- helm usage (helm 3 removed Tiller)
- oh-my-zsh usage (enabling the kubectl & helm plugins is helpful)
- istio can help you control your k8s resources (it works like a control plane)
vault
Vault is created by HashiCorp, a company that has changed a lot in DevOps with tools such as Vagrant, Terraform, and Packer. I like it. So, let's talk about Vault.
(This image is from the Vault docs.)
In one word, everything you need is here: PKI certificates, SSH certificates, cross-region, cross-cloud, cross-datacenter, and so on.
cert-manager
Cert-manager is a native Kubernetes certificate management controller. It mainly relies on Issuers and ClusterIssuers. First, look at this architecture image.
Different issuers provide different ways to sign and renew the certificates stored in secrets. Except for SelfSigned, all the others need to be configured against an external service. It supports these types:
- SelfSigned
- CA
- Vault
- Venafi
- External
- ACME
For more advanced setups, you can use the ACME mode, but it's not necessary.
aws secret manager
Due to limited resources, and by comparison with Vault, I decided not to run this experiment. On the other hand, we can see that it is used at GoDaddy.
kubeseal
It's designed to encrypt your Secret into a SealedSecret, which is safe to store, even in a public repository. It mainly consists of two parts: a client side and a server side. After you install it, you encrypt with the local part and decrypt with the server part; the server part is still just a controller in k8s.
Check out this image.
Secrets in Action
cert-manager
Step 1: install
- with kubectl in k8s
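A hedged reconstruction of the kubectl route; the exact release (v0.13.0 here) is an assumption from the era of this post, so pick the one that matches your cluster:

```sh
# Apply the cert-manager manifest straight from the GitHub release page.
# The version below is an assumption; adjust it to your cluster.
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.13.0/cert-manager.yaml
```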
- with helm in k8s
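And a hedged sketch of the helm route, using the standard jetstack chart repo; the cert-manager namespace is my choice:

```sh
# Create a dedicated namespace, add the jetstack repo, and install the chart.
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager
```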
Please note: the official demo was outdated.
Step 2: Issuers with SelfSigned
If you want to sign with other issuers, please make sure they exist first. For example, if you use it with Vault, you must install vault-helm (the Vault agent on the server). A minimal SelfSigned example is sketched below.
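Here is a minimal sketch of the SelfSigned path, applied via a heredoc; every name below is hypothetical, and cert-manager.io/v1alpha2 is the apiVersion from this era of cert-manager, so adjust it for newer releases:

```sh
# A SelfSigned Issuer plus a Certificate it signs (all names hypothetical).
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: default
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: demo-cert
  namespace: default
spec:
  secretName: demo-cert-tls
  commonName: demo.example.com
  issuerRef:
    name: selfsigned-issuer
EOF
```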
vault
Step 1: install Vault with helm
- vault-helm (Vault agent on k8s)

```sh
git clone https://github.com/hashicorp/vault-helm && cd vault-helm
```
Please check this tutorial
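The clone alone installs nothing; a hedged continuation, assuming helm 3 and Vault's standard init/unseal flow, might look like:

```sh
# Install the chart from the cloned directory (helm 3 syntax).
helm install vault .

# vault-0 is the chart's default pod name; verify with `kubectl get pods`.
# `operator init` prints the unseal keys and an initial root token.
kubectl exec -it vault-0 -- vault operator init

# Repeat with different unseal keys until the threshold (3 by default) is met.
kubectl exec -it vault-0 -- vault operator unseal
```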
- kubernetes-vault (the k8s Vault controller)
There are two different ways. I also followed this Quick start to learn it:
kubeseal with k8s
Please check the background part above to learn the workflow.
Step 1: install

client:

```sh
wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.9.7/kubeseal-linux-amd64 -O kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
```

server:

```sh
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.9.7/controller.yaml
```
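Optionally, you can fetch the controller's public certificate once and seal offline later; --fetch-cert is a standard kubeseal flag, and the output filename is my choice:

```sh
# Save the controller's public key so sealing can happen offline or in CI.
kubeseal --fetch-cert > sealed-secrets-cert.pem
```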
Step 2: usage
```sh
echo -n bar | kubectl create secret generic mysecret --dry-run --from-file=foo=/dev/stdin -o json > mysecret.json
```
Before encrypt, mysecret.json:

```json
{
  "kind": "Secret",
  "apiVersion": "v1",
  "metadata": {
    "name": "mysecret",
    "creationTimestamp": null
  },
  "data": {
    "foo": "YmFy"
  }
}
```
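The sealing step between the two files is done with the kubeseal client; roughly (the output filename is my choice):

```sh
# Encrypt the Secret with the controller's public key; only the
# in-cluster controller can decrypt the result.
kubeseal < mysecret.json > mysealedsecret.json
```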
After encrypt:

```json
{
  "kind": "SealedSecret",
  "apiVersion": "bitnami.com/v1alpha1",
  "metadata": {
    "name": "mysecret",
    "namespace": "default",
    "creationTimestamp": null
  },
  "spec": {
    "template": {
      "metadata": {
        "name": "mysecret",
        "namespace": "default",
        "creationTimestamp": null
      }
    },
    "encryptedData": {
      "foo": "AgAao2yYWSK7bN/Ll6NlsyESPhJ3ZnPLkikGtd3+y9oJ+p5PuJaPSWAclxsdLjX5nxucdLoEWa53IktzH0PbeWyyyyyyyyyyyyyyyyyyyyyyU0AA5txJX5QjVkCNA9vxIL7XeqLVyi/eno7oEEdA2BXySAK5a6Q3k3oTJ0uTiPJZOYFvsFeWpz2D4qNuKH9h0LqF3vqJVSmZF4QWdYEA1GndEJRAVzxP8V8HT0unss81w3yPt/bAmeunN4AyyyyyyyyyyyyyyyyyyyyyyyyyWadQ5h0LogC+vbBLKxuJzTXFzVRAzYbg6hbGJTZWQu0isSmLJZrwVKiyF54UIPWh4EnTbim/PLrU08CnuLhgGToeA24uwm/5dmmDnC2BvvQyeFi77fj4uLnJMx5LYw5wPYft0nCkowRJmhuu2cqUviUQ8FArAHc6xQOLKIjt5tojc2BNiIY7aKLzz9VSWVvcID7XfWRkdonYQbfBbGShZKdKCxxxxxxxxxxxxxxxxxxxxxxxxxx="
    }
  },
  "status": {
  }
}
```
In this process, if you find that it cannot fetch the certificate, you may need to expose the service with `kubectl expose service -n kube-system sealed-secrets-controller --type=ClusterIP`.
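To close the loop, a hedged sketch of applying the sealed output and checking that the controller unseals it back into a normal Secret:

```sh
# The in-cluster controller decrypts the SealedSecret into a normal Secret.
kubectl apply -f mysealedsecret.json
kubectl get secret mysecret -o jsonpath='{.data.foo}' | base64 --decode   # prints: bar
```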
Conclusion
With any one of these solutions, you need to provide the secret-manager capability, then add it to your deployment or patch it in. Whether we are talking about cloud security or cloud-native security, it is necessary to implement principles such as security by default and zero trust. For various reasons, I was unable to provide screenshots of all the experiments, but I hope you will try all the experiments mentioned above yourself.
At the end of this blog, here is a wonderful Chinese translation I found:
有人住高楼,有人在深沟,有人光万丈,有人一身锈,世人万千种,浮云莫去求,斯人若彩虹,遇上方知有。——《怦然心动》
Some of us get dipped in flat, some in satin, some in gloss. But every once in a while you find someone who’s iridescent, and when you do, nothing will ever compare.