cannot load sigs.k8s.io/controller-runtime/pkg/log when build image or start use "up local" #1907
Comments
If you are using Go modules then you don't need to run it. Also, note that the memcached-operator in the samples is not updated. We have the open PR operator-framework/operator-sdk-samples#72 for it. The same applies to the getting-started guide; see the PR operator-framework/getting-started#46. I'd suggest you follow the getting-started guide with the changes performed in that PR. Please let us know if this info allowed you to solve it.
@camilamacedo86 I still face this issue. The steps are nearly the same as those I have listed above, except that I do not execute that command. OK, let me try to follow the guide.
Hi @BIAOXYZ, please start from scratch and try to follow it here: https://github.com/operator-framework/getting-started/blob/b74062347414966affb5d05b17af2c18f27d3250/README.md Please let me know if it worked for you, and if not, add the step where you stopped and the issue faced.
@camilamacedo86 It still failed when following your link. Actually, you can reproduce it quickly in the Katacoda online environment. Besides, I think the commands in the two tutorials are nearly the same, except for, at the very first step, whether the initialized memcached-operator dir is inside or outside a subdir of $GOPATH/src.
Hi @BIAOXYZ, the Katacoda scenario is using a very old version. I'd recommend you check it locally. Here are the steps performed by myself locally just now with that version. Steps:
$ cd memcached-operator
package v1alpha1
// MemcachedSpec defines the desired state of Memcached
// +k8s:openapi-gen=true
type MemcachedSpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "operator-sdk generate k8s" to regenerate code after modifying this file
// Add custom validation using kubebuilder tags: https://book-v1.book.kubebuilder.io/beyond_basics/generating_crd.html
// Size is the size of the memcached deployment
Size int32 `json:"size"`
}
// MemcachedStatus defines the observed state of Memcached
// +k8s:openapi-gen=true
type MemcachedStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "operator-sdk generate k8s" to regenerate code after modifying this file
// Add custom validation using kubebuilder tags: https://book-v1.book.kubebuilder.io/beyond_basics/generating_crd.html
// Nodes are the names of the memcached pods
Nodes []string `json:"nodes"`
}
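For reference, a custom resource that exercises the Size field above might look like the following. The group `cache.example.com` and the name `example-memcached` are the usual scaffold defaults (and `example-memcached` matches the Request.Name in the logs further down), but treat them as assumptions for your own project:

```yaml
apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  name: example-memcached
spec:
  size: 3
```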
package memcached
import (
"context"
"reflect"
appsv1 "k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/labels"
cachev1alpha1 "github.com/example-inc/memcached-operator/pkg/apis/cache/v1alpha1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
logf "sigs.k8s.io/controller-runtime/pkg/runtime/log"
"sigs.k8s.io/controller-runtime/pkg/source"
)
var log = logf.Log.WithName("controller_memcached")
/**
* USER ACTION REQUIRED: This is a scaffold file intended for the user to modify with their own Controller
* business logic. Delete these comments after modifying this file.*
*/
// Add creates a new Memcached Controller and adds it to the Manager. The Manager will set fields on the Controller
// and Start it when the Manager is Started.
func Add(mgr manager.Manager) error {
return add(mgr, newReconciler(mgr))
}
// newReconciler returns a new reconcile.Reconciler
func newReconciler(mgr manager.Manager) reconcile.Reconciler {
return &ReconcileMemcached{client: mgr.GetClient(), scheme: mgr.GetScheme()}
}
// add adds a new Controller to mgr with r as the reconcile.Reconciler
func add(mgr manager.Manager, r reconcile.Reconciler) error {
// Create a new controller
c, err := controller.New("memcached-controller", mgr, controller.Options{Reconciler: r})
if err != nil {
return err
}
// Watch for changes to primary resource Memcached
err = c.Watch(&source.Kind{Type: &cachev1alpha1.Memcached{}}, &handler.EnqueueRequestForObject{})
if err != nil {
return err
}
// TODO(user): Modify this to be the types you create that are owned by the primary resource
// Watch for changes to secondary resource Pods and requeue the owner Memcached
err = c.Watch(&source.Kind{Type: &appsv1.Deployment{}}, &handler.EnqueueRequestForOwner{
IsController: true,
OwnerType: &cachev1alpha1.Memcached{},
})
if err != nil {
return err
}
err = c.Watch(&source.Kind{Type: &corev1.Service{}}, &handler.EnqueueRequestForOwner{
IsController: true,
OwnerType: &cachev1alpha1.Memcached{},
})
if err != nil {
return err
}
return nil
}
var _ reconcile.Reconciler = &ReconcileMemcached{}
// ReconcileMemcached reconciles a Memcached object
type ReconcileMemcached struct {
// TODO: Clarify the split client
// This client, initialized using mgr.Client() above, is a split client
// that reads objects from the cache and writes to the apiserver
client client.Client
scheme *runtime.Scheme
}
// Reconcile reads that state of the cluster for a Memcached object and makes changes based on the state read
// and what is in the Memcached.Spec
// TODO(user): Modify this Reconcile function to implement your Controller logic. This example creates
// a Memcached Deployment for each Memcached CR
// Note:
// The Controller will requeue the Request to be processed again if the returned error is non-nil or
// Result.Requeue is true, otherwise upon completion it will remove the work from the queue.
func (r *ReconcileMemcached) Reconcile(request reconcile.Request) (reconcile.Result, error) {
reqLogger := log.WithValues("Request.Namespace", request.Namespace, "Request.Name", request.Name)
reqLogger.Info("Reconciling Memcached.")
// Fetch the Memcached instance
memcached := &cachev1alpha1.Memcached{}
err := r.client.Get(context.TODO(), request.NamespacedName, memcached)
if err != nil {
if errors.IsNotFound(err) {
// Request object not found, could have been deleted after reconcile request.
// Owned objects are automatically garbage collected. For additional cleanup logic use finalizers.
// Return and don't requeue
reqLogger.Info("Memcached resource not found. Ignoring since object must be deleted.")
return reconcile.Result{}, nil
}
// Error reading the object - requeue the request.
reqLogger.Error(err, "Failed to get Memcached.")
return reconcile.Result{}, err
}
// Check if the Deployment already exists, if not create a new one
deployment := &appsv1.Deployment{}
err = r.client.Get(context.TODO(), types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, deployment)
if err != nil && errors.IsNotFound(err) {
// Define a new Deployment
dep := r.deploymentForMemcached(memcached)
reqLogger.Info("Creating a new Deployment.", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
err = r.client.Create(context.TODO(), dep)
if err != nil {
reqLogger.Error(err, "Failed to create new Deployment.", "Deployment.Namespace", dep.Namespace, "Deployment.Name", dep.Name)
return reconcile.Result{}, err
}
// Deployment created successfully - return and requeue
// NOTE: that the requeue is made with the purpose to provide the deployment object for the next step to ensure the deployment size is the same as the spec.
// Also, you could GET the deployment object again instead of requeue if you wish. See more over it here: https://godoc.org/sigs.k8s.io/controller-runtime/pkg/reconcile#Reconciler
return reconcile.Result{Requeue: true}, nil
} else if err != nil {
reqLogger.Error(err, "Failed to get Deployment.")
return reconcile.Result{}, err
}
// Ensure the deployment size is the same as the spec
size := memcached.Spec.Size
if *deployment.Spec.Replicas != size {
deployment.Spec.Replicas = &size
err = r.client.Update(context.TODO(), deployment)
if err != nil {
reqLogger.Error(err, "Failed to update Deployment.", "Deployment.Namespace", deployment.Namespace, "Deployment.Name", deployment.Name)
return reconcile.Result{}, err
}
}
// Check if the Service already exists, if not create a new one
// NOTE: The Service is used to expose the Deployment. However, the Service is not required at all for the memcached example to work. The purpose is to add more examples of what you can do in your operator project.
service := &corev1.Service{}
err = r.client.Get(context.TODO(), types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, service)
if err != nil && errors.IsNotFound(err) {
// Define a new Service object
ser := r.serviceForMemcached(memcached)
reqLogger.Info("Creating a new Service.", "Service.Namespace", ser.Namespace, "Service.Name", ser.Name)
err = r.client.Create(context.TODO(), ser)
if err != nil {
reqLogger.Error(err, "Failed to create new Service.", "Service.Namespace", ser.Namespace, "Service.Name", ser.Name)
return reconcile.Result{}, err
}
} else if err != nil {
reqLogger.Error(err, "Failed to get Service.")
return reconcile.Result{}, err
}
// Update the Memcached status with the pod names
// List the pods for this memcached's deployment
podList := &corev1.PodList{}
labelSelector := labels.SelectorFromSet(labelsForMemcached(memcached.Name))
listOps := &client.ListOptions{
Namespace: memcached.Namespace,
LabelSelector: labelSelector,
}
err = r.client.List(context.TODO(), listOps, podList)
if err != nil {
reqLogger.Error(err, "Failed to list pods.", "Memcached.Namespace", memcached.Namespace, "Memcached.Name", memcached.Name)
return reconcile.Result{}, err
}
podNames := getPodNames(podList.Items)
// Update status.Nodes if needed
if !reflect.DeepEqual(podNames, memcached.Status.Nodes) {
memcached.Status.Nodes = podNames
err := r.client.Status().Update(context.TODO(), memcached)
if err != nil {
reqLogger.Error(err, "Failed to update Memcached status.")
return reconcile.Result{}, err
}
}
return reconcile.Result{}, nil
}
// deploymentForMemcached returns a memcached Deployment object
func (r *ReconcileMemcached) deploymentForMemcached(m *cachev1alpha1.Memcached) *appsv1.Deployment {
ls := labelsForMemcached(m.Name)
replicas := m.Spec.Size
dep := &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: m.Name,
Namespace: m.Namespace,
},
Spec: appsv1.DeploymentSpec{
Replicas: &replicas,
Selector: &metav1.LabelSelector{
MatchLabels: ls,
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: ls,
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{{
Image: "memcached:1.4.36-alpine",
Name: "memcached",
Command: []string{"memcached", "-m=64", "-o", "modern", "-v"},
Ports: []corev1.ContainerPort{{
ContainerPort: 11211,
Name: "memcached",
}},
}},
},
},
},
}
// Set Memcached instance as the owner of the Deployment.
controllerutil.SetControllerReference(m, dep, r.scheme)
return dep
}
// serviceForMemcached function takes in a Memcached object and returns a Service for that object.
func (r *ReconcileMemcached) serviceForMemcached(m *cachev1alpha1.Memcached) *corev1.Service {
ls := labelsForMemcached(m.Name)
ser := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: m.Name,
Namespace: m.Namespace,
},
Spec: corev1.ServiceSpec{
Selector: ls,
Ports: []corev1.ServicePort{
{
Port: 11211,
Name: m.Name,
},
},
},
}
// Set Memcached instance as the owner of the Service.
controllerutil.SetControllerReference(m, ser, r.scheme)
return ser
}
// labelsForMemcached returns the labels for selecting the resources
// belonging to the given memcached CR name.
func labelsForMemcached(name string) map[string]string {
return map[string]string{"app": "memcached", "memcached_cr": name}
}
// getPodNames returns the pod names of the array of pods passed in
func getPodNames(pods []corev1.Pod) []string {
var podNames []string
for _, pod := range pods {
podNames = append(podNames, pod.Name)
}
return podNames
}
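The two helpers at the bottom of the controller are plain Go and easy to sanity-check in isolation. Below is a minimal, self-contained sketch of them; the local `pod` struct is a stand-in for `corev1.Pod` (an assumption, just so the snippet compiles without controller-runtime or client-go on the path):

```go
package main

import "fmt"

// pod is a stand-in for corev1.Pod; only the Name field matters here.
type pod struct{ Name string }

// labelsForMemcached mirrors the helper above: it returns the labels
// used to select the resources belonging to the given memcached CR name.
func labelsForMemcached(name string) map[string]string {
	return map[string]string{"app": "memcached", "memcached_cr": name}
}

// getPodNames mirrors the helper above: it collects the names of the
// pods passed in, in order.
func getPodNames(pods []pod) []string {
	var podNames []string
	for _, p := range pods {
		podNames = append(podNames, p.Name)
	}
	return podNames
}

func main() {
	fmt.Println(labelsForMemcached("example-memcached")["memcached_cr"])
	fmt.Println(getPodNames([]pod{{Name: "a"}, {Name: "b"}}))
}
```

Note that `getPodNames` returns a nil slice for an empty input, which is why the reconcile loop compares it against `Status.Nodes` with `reflect.DeepEqual` rather than a simple length check.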
Be sure that you are logged in as a system admin in order to be able to apply the RBAC files above successfully.
$ operator-sdk up local --namespace=default
INFO[0000] Running the operator locally.
INFO[0000] Using namespace default.
{"level":"info","ts":1568288601.18513,"logger":"cmd","msg":"Go Version: go1.12.7"}
{"level":"info","ts":1568288601.185185,"logger":"cmd","msg":"Go OS/Arch: darwin/amd64"}
{"level":"info","ts":1568288601.185193,"logger":"cmd","msg":"Version of operator-sdk: v0.10.0"}
{"level":"info","ts":1568288601.18738,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1568288601.187408,"logger":"leader","msg":"Skipping leader election; not running in a cluster."}
{"level":"info","ts":1568288601.292104,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1568288601.29224,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"memcached-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1568288601.292341,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"memcached-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1568288601.2923799,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"memcached-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1568288601.292462,"logger":"cmd","msg":"Could not generate and serve custom resource metrics","error":"operator run mode forced to local"}
{"level":"info","ts":1568288601.340925,"logger":"metrics","msg":"Skipping metrics Service creation; not running in a cluster."}
{"level":"info","ts":1568288601.369492,"logger":"cmd","msg":"Could not create ServiceMonitor object","error":"no ServiceMonitor registered with the API"}
{"level":"info","ts":1568288601.3695202,"logger":"cmd","msg":"Install prometheus-operator in your cluster to create ServiceMonitor objects","error":"no ServiceMonitor registered with the API"}
{"level":"info","ts":1568288601.369524,"logger":"cmd","msg":"Starting the Cmd."}
{"level":"info","ts":1568288601.4709032,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"memcached-controller"}
{"level":"info","ts":1568288601.5752,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"memcached-controller","worker count":1}
{"level":"info","ts":1568288601.5760498,"logger":"controller_memcached","msg":"Reconciling Memcached.","Request.Namespace":"default","Request.Name":"example-memcached"} I hope that it helps you with. Please, let us know if you could check it with the info provided. |
Hi @camilamacedo86. As you say, I agree that the operator-sdk version in your Katacoda link (more specifically, Operator SDK with Go) is very old. But notice that the Katacoda link I use is this one (Launch Single Node Kubernetes Cluster). The main difference is that the second one only has a k8s cluster bootstrapped by minikube, so I can install a customized Go, operator-sdk, etc., all from scratch. I indeed tried both locally and in the online environment with release 0.10.0, but I will start to try v0.9.0, thank you~:) BTW, I think there is a typo in your step 3:
…mbers and methods of client object. (operator-framework#1907)
@camilamacedo86 I have found the reason: it's due to an incorrect package path for logf. In addition, I fixed some other small errors. Please see this PR: #1915
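For readers hitting the same compile error: the fix amounts to pointing the logf import at the path that actually exists in the controller-runtime version pinned by the SDK. A sketch of the change is below; the exact direction depends on your controller-runtime version (`pkg/runtime/log` is the pre-v0.2.0 location, `pkg/log` the newer one), so check the version in your go.mod or Gopkg.lock before applying it:

```diff
-	logf "sigs.k8s.io/controller-runtime/pkg/log"
+	logf "sigs.k8s.io/controller-runtime/pkg/runtime/log"
```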
Updating the
Bug Report
What did you do?
Following the official example <<user-guide>> to build the memcached-operator with Go modules.
What did you expect to see?
Example succeeds.
What did you see instead? Under which circumstances?
First I met this issue and solved it, but then I met this one.
Environment
Kubernetes cluster kind:
minikube
Are you writing your operator in ansible, helm, or go?
Go
⭐️ : The same issue happened on both my local machine and an online environment provided by Katacoda; you can easily reproduce it via this link.
To reproduce this issue on Katacoda: