Kubernetes Custom Resource Definitions
Introduction⌗
Custom Resource Definitions (CRDs) are a way to extend Kubernetes with your own object types and use them to manage your cluster more effectively.
K8s ships with resources like Pods and Deployments out of the box. When you need specialized management, you can create your own resources and define how K8s should handle them. This is where CRDs come in. The possibilities are many, and in this article you will learn how to get started.
Prerequisites⌗
- Go knowledge. We will be writing Go code, so install the necessary tools to develop in it.
- Kubernetes. We will need a cluster. You can use one in the cloud or use a tool like kind to create a local cluster (a sample command follows this list).
- Docker. Install Docker Desktop, which can also give you a local K8s cluster out of the box.
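If you go the kind route, creating a throwaway local cluster is a one-liner (the cluster name crd-demo below is arbitrary):
$ kind create cluster --name crd-demo
$ kubectl cluster-info --context kind-crd-demo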
Prepare local development environment⌗
Step 1: Verify your cluster exists⌗
You will query the cluster for its nodes using the kubectl command:
$ kubectl get nodes
NAME                            STATUS   ROLES   AGE   VERSION
aks-lp03b-20242036-vmss000000   Ready    agent   19d   v1.28.5
aks-lp03b-20242036-vmss000001   Ready    agent   19d   v1.28.5
aks-lp03b-20242036-vmss000002   Ready    agent   19d   v1.28.5
akswp03b000000                  Ready    agent   19d   v1.28.5
akswp03b000001                  Ready    agent   19d   v1.28.5
akswp03b000002                  Ready    agent   19d   v1.28.5
My cluster has 6 nodes: 3 Linux and 3 Windows.
Step 2: Set up a local repo⌗
This repo will hold the K8s manifest files for our resources and the custom controller code that watches for changes on our resource.
We will create a resource named Hello that takes a message and outputs a greeting. We will then update this message using kubectl and log the change in the custom controller.
Create a project structure like this:
├── LICENSE
├── README.md
├── cmd
│   └── main.go          # entry point of the custom controller
├── go.mod
├── manifests            # k8s manifest files
│   ├── hello-crd.yml    # defines the CRD
│   └── hello.yaml       # defines the Hello resource
└── pkg                  # package specific code
    └── placeholder.txt
You can find the full project here.
Step 3: Create a namespace⌗
A dedicated namespace will make cleaning up later much easier.
$ kubectl create namespace localusr-agents
namespace/localusr-agents created
$ kubectl get ns # checks the namespaces
NAME              STATUS   AGE
kube-public       Active   19d
kube-system       Active   19d
localusr-agents   Active   45s
$ kubectl config set-context --current --namespace=localusr-agents # switch to the created namespace
Context "localusr-admin" modified.
Another advantage of namespaces is that they let you limit access to resources with service accounts and, more generally, isolate workloads.
Steps to creating a CRD and a Resource⌗
Step 1: Create a CRD⌗
Open the manifests/hello-crd.yml file and define our CRD as follows:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: hello.localusr.io
spec:
  group: localusr.io
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                message:
                  type: string
  scope: Namespaced
  names:
    plural: hello
    singular: hello
    kind: Hello
    shortNames:
      - hey
A few things to note when creating a CRD: the plural name and the group together form the CRD's name, i.e. <plural>.<group>. So when we list the CRD above, the output will show hello.localusr.io.
Now, let us apply the CRD:
$ kubectl apply -f manifests/hello-crd.yml
customresourcedefinition.apiextensions.k8s.io/hello.localusr.io created
$ kubectl get crd
NAME                CREATED AT
...
hello.localusr.io   2024-04-09T15:43:20Z
...
Now, if we try to get resources of kind Hello, we should not find any.
$ kubectl get hello
No resources found in localusr-agents namespace.
This is good! All we have to do now is define a Hello resource and apply it.
Step 2: Define a Hello resource⌗
Defining a resource for a CRD works the same way as it does for an out-of-the-box resource.
Open the file manifests/hello.yaml and update it like this:
apiVersion: localusr.io/v1
kind: Hello
metadata:
  name: greetings
spec:
  message: "Hello, World!"
Go ahead and apply it:
$ kubectl apply -f manifests/hello.yaml
hello.localusr.io/greetings created
We can now re-check the resource:
$ kubectl get hello
NAME        AGE
greetings   45s
$ kubectl describe hello greetings
Name:         greetings
Namespace:    localusr-agents
Labels:       <none>
Annotations:  <none>
API Version:  localusr.io/v1
Kind:         Hello
Metadata:
  Creation Timestamp:  2024-04-09T15:50:05Z
  Generation:          1
  Resource Version:    8648908
  UID:                 f4474826-2846-478b-ad48-46861e176174
Spec:
  Message:  Hello, World!
Events:     <none>
Perfect! You could leave it there as it is. However, that was the easy part; we want to take control of our resource and define how to react when its values are updated, when it is deleted, and in the many other scenarios that probably brought you here.
Steps to creating a custom controller⌗
Install the necessary helper Go libraries⌗
We will need common libraries to help us interact with the K8s API as well as our resource.
$ go get k8s.io/apimachinery k8s.io/client-go sigs.k8s.io/controller-runtime
The specific versions will be the latest releases of each library.
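For reference, after running go get your go.mod should look roughly like the sketch below. The module path and version numbers are illustrative; yours will differ depending on when you run the command:
module example.com/hello-controller

go 1.21

require (
	k8s.io/apimachinery v0.29.3
	k8s.io/client-go v0.29.3
	sigs.k8s.io/controller-runtime v0.17.2
)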
The custom controller that we are building will watch for changes in the Hello resources and log the message we have in our spec.
Set up the controller⌗
Get the config file path⌗
Open cmd/main.go and update the main function like this:
func main() {
	// needs "fmt", "path/filepath" and "k8s.io/client-go/util/homedir" imported
	var kubecfg string
	// if the home directory exists, build the path to .kube/config
	if home := homedir.HomeDir(); home != "" {
		kubecfg = filepath.Join(home, ".kube", "config")
	}
	fmt.Println(kubecfg) // should print the path to .kube/config in your home directory
}
Running the above snippet should print the path to .kube/config from your home directory:
$ go run cmd/main.go
/home/kang/.kube/config
Build the config⌗
To build up the config, add the following lines:
// other code

// build the config from the local kubeconfig path
config, err := clientcmd.BuildConfigFromFlags("", kubecfg)
if err != nil {
	fmt.Println("Using the config in the cluster")
	// an error occurred while building from the kubeconfig file. Try building
	// from the cluster environment variables and service account token.
	config, err = rest.InClusterConfig()
	if err != nil {
		panic(err.Error())
	}
}
These lines of code first try to build the config from the local kubeconfig path. If the external .kube/config file wasn't found, they fall back to building the configuration from the cluster's environment variables and the service account token. If that also fails, the program panics, as we have no config.
Now that we have a config, we need to set up a client to interact with the K8s API. We use client-go/dynamic to create a dynamic client.
Set up a client⌗
A dynamic client works with any resource type at runtime, which makes it a good option for CRDs or for cases where we don't know in advance all the resource types our application will interact with. If we do have that information, we can consider using a typed (static) client instead.
Add these lines of code:
// other lines of code
client, err := dynamic.NewForConfig(config)
if err != nil {
	panic(err.Error())
}
Define the CRD⌗
Here we will use the information from the CRD spec above: the API group, the version, and the plural resource name:
// other lines of code
helloResource := schema.GroupVersionResource{Group: "localusr.io", Version: "v1", Resource: "hello"}
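Before wiring up the informer, here is a minimal sketch (not part of the final controller) of how the dynamic client and this GroupVersionResource work together. Everything comes back as unstructured objects, so fields like spec.message are read by field path; this assumes v1 aliases k8s.io/apimachinery/pkg/apis/meta/v1 and that the unstructured package is imported:
// list Hello resources in our namespace and read spec.message from each
list, err := client.Resource(helloResource).Namespace("localusr-agents").List(context.TODO(), v1.ListOptions{})
if err != nil {
	panic(err.Error())
}
for _, item := range list.Items {
	// NestedString walks the unstructured map by field path
	msg, found, err := unstructured.NestedString(item.Object, "spec", "message")
	if err == nil && found {
		fmt.Printf("%s says: %s\n", item.GetName(), msg)
	}
}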
Set up an informer⌗
Now we need a loop that keeps watching our hello resources and informs us of any changes. The informer will also be useful for listing resources.
Add these lines:
informer := cache.NewSharedIndexInformer(
	&cache.ListWatch{
		ListFunc: func(options v1.ListOptions) (runtime.Object, error) {
			return client.Resource(helloResource).Namespace("").List(context.TODO(), options)
		},
		WatchFunc: func(options v1.ListOptions) (watch.Interface, error) {
			return client.Resource(helloResource).Namespace("").Watch(context.TODO(), options)
		},
	},
	&unstructured.Unstructured{}, // the object type the informer yields
	0,                            // resync period; 0 disables periodic resyncs
	cache.Indexers{},
)
Set up an event handler⌗
We will monitor three events on our resource: Add, Update, and Delete.
For some organization, I will put all the event handler code in a new file, pkg/hello/handler.go. Create that file and add the code:
package hello

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
)

// HelloEvent lists the callbacks a Hello event handler provides.
type HelloEvent interface {
	AddEvent() func(obj interface{})
	UpdateEvent() func(oldObj, newObj interface{})
	DeleteEvent() func(obj interface{})
}

type HelloEventHandler struct{}

// NewEvent bundles the callbacks into the struct the informer expects.
func (h HelloEventHandler) NewEvent() cache.ResourceEventHandlerFuncs {
	return cache.ResourceEventHandlerFuncs{
		AddFunc:    h.AddEvent(),
		UpdateFunc: h.UpdateEvent(),
		DeleteFunc: h.DeleteEvent(),
	}
}

func (h HelloEventHandler) AddEvent() func(obj interface{}) {
	return func(obj interface{}) {
		fmt.Println("Added a ", obj)
	}
}

func (h HelloEventHandler) UpdateEvent() func(oldObj, newObj interface{}) {
	return func(oldObj, newObj interface{}) {
		fmt.Println("Updated a ", oldObj, newObj)
	}
}

func (h HelloEventHandler) DeleteEvent() func(obj interface{}) {
	return func(obj interface{}) {
		fmt.Println("Deleted a ", obj)
	}
}
With this code in place, go back to the cmd/main.go file and create a HelloEventHandler, which we will use to register an event handler on the informer.
// other code
helloEventHandler := hello.HelloEventHandler{}
informer.AddEventHandler(helloEventHandler.NewEvent())
Run the informer⌗
Now that everything is set, we are going to run the informer, using a channel to signal when it should stop.
// other code
stop := make(chan struct{})
defer close(stop)

go informer.Run(stop)

if !cache.WaitForCacheSync(stop, informer.HasSynced) {
	panic("Timeout waiting for the cache to sync")
}

fmt.Println("Custom Resource Controller is running")
<-stop
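As written, the program blocks on <-stop forever and you stop the controller by killing the process, which is fine for a demo. If you want a clean shutdown on Ctrl-C, one possible variation (my own sketch, not part of the original project) closes the stop channel from a signal handler; note there is no defer close(stop) here, since the handler owns closing the channel:
// sketch: stop the informer on SIGINT/SIGTERM
// (additionally requires "os", "os/signal" and "syscall" imported)
stop := make(chan struct{})

sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
go func() {
	<-sigs
	close(stop) // unblocks <-stop below and stops the informer
}()

go informer.Run(stop)

if !cache.WaitForCacheSync(stop, informer.HasSynced) {
	panic("Timeout waiting for the cache to sync")
}

fmt.Println("Custom Resource Controller is running")
<-stop // returns once the signal handler closes the channel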
Run the controller⌗
With all that in place, run the controller.
$ go run cmd/main.go
Added a &{map[apiVersion:localusr.io/v1 kind:Hello ...
Custom Resource Controller is running
And we are up!
Running the controller without making changes to the resources won't produce any further output.
Testing out our controller⌗
Update the resource⌗
We will update the manifests/hello.yaml file, changing the message to "Hello there, updated the resource".
apiVersion: localusr.io/v1
kind: Hello
metadata:
  name: greetings
spec:
  message: "Hello there, updated the resource"
Then run the update in a separate terminal using kubectl:
$ kubectl apply -f manifests/hello.yaml
hello.localusr.io/greetings configured
Jump back to the terminal where the controller is running and you will see something like this:
Updated a &{map[apiVersion:localusr.io/v1 kind:Hello ...
The text Updated a shows that the Update event handler in our pkg/hello/handler.go file was called.
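Dumping the entire object is noisy. As a small refinement (a sketch of mine, assuming the objects arrive as *unstructured.Unstructured values, which is what our informer produces), you could log just the resource name and the new message:
// in pkg/hello/handler.go; additionally import
// "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
func (h HelloEventHandler) UpdateEvent() func(oldObj, newObj interface{}) {
	return func(oldObj, newObj interface{}) {
		u, ok := newObj.(*unstructured.Unstructured)
		if !ok {
			return // unexpected type; ignore
		}
		msg, found, err := unstructured.NestedString(u.Object, "spec", "message")
		if err == nil && found {
			fmt.Printf("Updated %s: message is now %q\n", u.GetName(), msg)
		}
	}
}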
Delete the resource⌗
The same thing happens when we delete the Hello resource named greetings:
$ kubectl delete hello greetings
hello.localusr.io "greetings" deleted
And in the controller you see the deleted logs:
Deleted a &{map[apiVersion:localusr.io/v1 kind:Hello ...
Cleaning up⌗
We are going to delete the namespace that we created and get back to our initial state.
Deleting the namespace⌗
Run:
$ kubectl config set-context --current --namespace=default
Context "localusr-admin" modified.
$ kubectl delete ns localusr-agents
namespace "localusr-agents" deleted
Deploy my controller?⌗
Yes, you can deploy your controller into your cluster. If, for example, your controller clears pods that are not assigned, you will want it running in the cluster at all times.
You can containerize your application and deploy it as a pod in your cluster. More on this in another article.
Conclusion⌗
We have seen how to create CRDs, defined our own resources from them, and finally wired up a custom controller that monitors our resources and reacts as they change.
You can comment on this article by clicking on this link.