The reconciler pattern makes sense in theory. Here’s what it looks like in code.
What we’re building
A controller that watches ConfigMaps and logs when they’re created, updated, or deleted. Simple enough to understand the wiring, real enough to show the patterns.
Project setup
Initialize the project:
mkdir configmap-controller && cd configmap-controller
go mod init github.com/example/configmap-controller
go get sigs.k8s.io/controller-runtime@latest
The reconciler
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

type ConfigMapReconciler struct {
	client.Client
}

func (r *ConfigMapReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	// Fetch the ConfigMap named in the request. A NotFound error means it
	// was deleted between the event and this reconcile.
	var cm corev1.ConfigMap
	if err := r.Get(ctx, req.NamespacedName, &cm); err != nil {
		if errors.IsNotFound(err) {
			logger.Info("ConfigMap deleted", "name", req.NamespacedName)
			return ctrl.Result{}, nil
		}
		// Any other error: return it and the request is requeued with backoff.
		return ctrl.Result{}, err
	}

	logger.Info("Reconciling ConfigMap",
		"name", cm.Name,
		"namespace", cm.Namespace,
		"keys", fmt.Sprintf("%v", keys(cm.Data)),
	)
	return ctrl.Result{}, nil
}

// keys returns the data keys of a ConfigMap, for logging.
func keys(m map[string]string) []string {
	result := make([]string, 0, len(m))
	for k := range m {
		result = append(result, k)
	}
	return result
}
The reconciler embeds client.Client, which gives it Get, List, Create, Update, Delete, and Patch methods for talking to the Kubernetes API.
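To make the write path concrete, here’s a minimal sketch of what a mutation through the embedded client would look like inside Reconcile; the annotation key is hypothetical, not part of the controller we’re building:

	// Hypothetical: stamp an annotation on the fetched ConfigMap and write
	// it back. Update bypasses the cache and goes straight to the API server.
	if cm.Annotations == nil {
		cm.Annotations = map[string]string{}
	}
	cm.Annotations["example.com/observed"] = "true"
	if err := r.Update(ctx, &cm); err != nil {
		return ctrl.Result{}, err
	}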
The manager
The manager is the entry point. It creates the client, cache, and controller, then starts everything:
package main

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	ctrl.SetLogger(zap.New())
	logger := ctrl.Log.WithName("setup")

	// The manager owns the client, the cache, and the controllers.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		logger.Error(err, "unable to create manager")
		os.Exit(1)
	}

	// Register a controller that watches ConfigMaps and calls our reconciler.
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}).
		Complete(&ConfigMapReconciler{
			Client: mgr.GetClient(),
		}); err != nil {
		logger.Error(err, "unable to create controller")
		os.Exit(1)
	}

	// Block until a termination signal (SIGINT/SIGTERM) arrives.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		logger.Error(err, "manager exited with error")
		os.Exit(1)
	}
}
The key line is:
ctrl.NewControllerManagedBy(mgr).
	For(&corev1.ConfigMap{}).
	Complete(&ConfigMapReconciler{...})
For tells the controller which resource type to watch. Complete wires in your reconciler. controller-runtime handles the watch, the work queue, and the event loop.
Running it
With a kubeconfig pointing at a cluster (or a local kind cluster):
go run .
In another terminal, create a ConfigMap:
kubectl create configmap test-config --from-literal=key1=value1
You’ll see the reconciler log:
INFO Reconciling ConfigMap name=test-config namespace=default keys=[key1]
Update it:
kubectl patch configmap test-config -p '{"data":{"key2":"value2"}}'
INFO Reconciling ConfigMap name=test-config namespace=default keys=[key1 key2]
Delete it:
kubectl delete configmap test-config
INFO ConfigMap deleted name=default/test-config
What controller-runtime does for you
Behind the scenes, For(&corev1.ConfigMap{}) sets up:
- An informer — a cached watch that keeps a local, in-memory copy of every ConfigMap
- Event handlers — functions that extract the resource key and enqueue it when something changes
- A work queue — deduplicates events so rapid updates don’t cause redundant reconciles
- Rate limiting — prevents a single broken resource from overwhelming the reconciler
You don’t configure any of this. The defaults are sensible for most controllers.
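When you do need to tune them, the builder accepts options. As a sketch, raising the reconciler’s concurrency (the value 4 is arbitrary) might look like:

	// requires "sigs.k8s.io/controller-runtime/pkg/controller"
	ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}).
		WithOptions(controller.Options{MaxConcurrentReconciles: 4}).
		Complete(&ConfigMapReconciler{Client: mgr.GetClient()})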
The cache
controller-runtime’s client reads from a local cache, not directly from the API server. This is important to understand:
- r.Get() and r.List() read from the cache (fast, no API call)
- r.Create(), r.Update(), r.Delete(), and r.Patch() go directly to the API server
- The cache is populated by informers and is eventually consistent
If you need a guaranteed fresh read (rare), you can use APIReader:
type ConfigMapReconciler struct {
	client.Client
	APIReader client.Reader
}

// guaranteed fresh read
r.APIReader.Get(ctx, req.NamespacedName, &cm)
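The reader itself comes from the manager; wiring it up in main is one more field:

	Complete(&ConfigMapReconciler{
		Client:    mgr.GetClient(),
		APIReader: mgr.GetAPIReader(), // reads bypass the cache
	})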
But default cache reads are almost always sufficient.
Next steps
This controller watches all ConfigMaps in the cluster. In practice, you’d filter by label, namespace, or use predicates to control which events trigger reconciliation. You’d also define your own Custom Resource Definitions instead of reacting to built-in types.
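As a taste of where that goes, a label filter with predicates might look like this sketch (the label key is made up):

	// requires "sigs.k8s.io/controller-runtime/pkg/builder" and
	// "sigs.k8s.io/controller-runtime/pkg/predicate"
	ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}, builder.WithPredicates(
			predicate.NewPredicateFuncs(func(obj client.Object) bool {
				// Hypothetical label; only matching ConfigMaps are reconciled.
				return obj.GetLabels()["example.com/watched"] == "true"
			}),
		)).
		Complete(&ConfigMapReconciler{Client: mgr.GetClient()})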