Docker in Docker enabled, initializing...
================================================================================
Starting Docker: docker.
Waiting for docker to be ready, sleeping for 1 seconds.
================================================================================
Done setting up docker in docker.
Activated service account credentials for: [prow-build@k8s-infra-prow-build.iam.gserviceaccount.com]
+ WRAPPED_COMMAND_PID=186
+ wait 186
+ ./hack/jenkins/test-dockerized.sh
+ export PATH=/home/prow/go/bin:/home/prow/go/src/k8s.io/kubernetes/third_party/etcd:/usr/local/go/bin:/home/prow/go/bin:/go/bin:/usr/local/go/bin:/google-cloud-sdk/bin:/workspace:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ PATH=/home/prow/go/bin:/home/prow/go/src/k8s.io/kubernetes/third_party/etcd:/usr/local/go/bin:/home/prow/go/bin:/go/bin:/usr/local/go/bin:/google-cloud-sdk/bin:/workspace:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ export GO111MODULE=off
+ GO111MODULE=off
+ pushd ./hack/tools
+ GO111MODULE=on
+ go install gotest.tools/gotestsum
go: downloading gotest.tools/gotestsum v1.6.4
go: downloading github.com/dnephin/pflag v1.0.7
go: downloading golang.org/x/tools v0.6.0
go: downloading github.com/fatih/color v1.14.1
go: downloading github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/jonboulle/clockwork v0.2.2
go: downloading golang.org/x/crypto v0.5.0
go: downloading golang.org/x/sync v0.1.0
go: downloading github.com/fsnotify/fsnotify v1.5.4
go: downloading golang.org/x/sys v0.5.0
go: downloading github.com/mattn/go-colorable v0.1.13
go: downloading github.com/mattn/go-isatty v0.0.17
go: downloading golang.org/x/term v0.5.0
go: downloading golang.org/x/mod v0.8.0
+ popd
+ export KUBE_COVER=n
+ KUBE_COVER=n
+ export ARTIFACTS=/logs/artifacts
+ ARTIFACTS=/logs/artifacts
+ export KUBE_KEEP_VERBOSE_TEST_OUTPUT=y
+ KUBE_KEEP_VERBOSE_TEST_OUTPUT=y
+ export KUBE_INTEGRATION_TEST_MAX_CONCURRENCY=4
+ KUBE_INTEGRATION_TEST_MAX_CONCURRENCY=4
+ export LOG_LEVEL=4
+ LOG_LEVEL=4
+ cd /home/prow/go/src/k8s.io/kubernetes
+ ./hack/install-etcd.sh
Downloading https://github.com/coreos/etcd/releases/download/v3.5.7/etcd-v3.5.7-linux-amd64.tar.gz succeed
etcd v3.5.7 installed. To use:
export PATH="/home/prow/go/src/k8s.io/kubernetes/third_party/etcd:${PATH}"
+ make test-cmd
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 165: bogus-expected-to-fail: command not found
!!! [0318 12:46:05] Call tree:
!!! [0318 12:46:05]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0318 12:46:05]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0318 12:46:05]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:141 juLog(...)
!!! [0318 12:46:05]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:169 record_command(...)
!!! [0318 12:46:05]  5: hack/make-rules/test-cmd.sh:35 source(...)
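For anyone trying to replay this setup outside the Prow container, the trace above boils down to installing the pinned etcd and invoking the test-cmd make target. A minimal sketch, assuming a kubernetes checkout at ~/go/src/k8s.io/kubernetes; the ARTIFACTS path here is a local stand-in, not the job's:

  # Sketch: reproduce the test-cmd environment locally.
  cd ~/go/src/k8s.io/kubernetes
  ./hack/install-etcd.sh                            # fetches the pinned etcd into third_party/etcd
  export PATH="$(pwd)/third_party/etcd:${PATH}"     # per the installer's own hint above
  export ARTIFACTS="${ARTIFACTS:-/tmp/artifacts}"   # JUnit XML and component logs land here
  export LOG_LEVEL=4
  make test-cmd                                     # drives test/cmd/legacy-script.sh via hack/make-rules
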
+++ exit code: 1
+++ error: 1
+++ [0318 12:46:05] Running kubeadm tests
go version go1.20.2 linux/amd64
+++ [0318 12:46:09] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kubeadm (static)
go version go1.20.2 linux/amd64
+++ [0318 12:47:02] Running tests without code coverage
{"Time":"2023-03-18T12:47:38.824722074Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t27.879s\n"}
✓ cmd/kubeadm/test/cmd (27.883s)

DONE 60 tests in 0.002s
+++ [0318 12:47:38] prune-junit-xml not found; installing from hack/tools
processing junit xml file : /logs/artifacts/junit_20230318-124702.xml
done.
+++ [0318 12:47:39] Saved JUnit XML test report to /logs/artifacts/junit_20230318-124702.xml
+++ [0318 12:47:39] Running kubectl tests for kube-apiserver
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir /tmp/tmp.4jlR1cAn8D --listen-client-urls http://127.0.0.1:2379 --log-level=warn 2> "/logs/artifacts/etcd.8ae82a97-c58a-11ed-8f15-da574695a788.root.log.DEBUG.20230318-124739.1580" >/dev/null
Waiting for etcd to come up.
+++ [0318 12:47:40] On try 2, etcd: : {"health":"true","reason":""}
{"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"2","raft_term":"2"}}
+++ [0318 12:47:40] Building kubectl
go version go1.20.2 linux/amd64
+++ [0318 12:47:40] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kubectl (static)
    k8s.io/kubernetes/cmd/kubectl-convert (static)
+++ [0318 12:47:59] Running kubectl with no options
kubectl controls the Kubernetes cluster manager.

 Find more information at: https://kubernetes.io/docs/reference/kubectl/

Basic Commands (Beginner):
  create          Create a resource from a file or from stdin
  expose          Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service
  run             Run a particular image on the cluster
  set             Set specific features on objects

Basic Commands (Intermediate):
  explain         Get documentation for a resource
  get             Display one or many resources
  edit            Edit a resource on the server
  delete          Delete resources by file names, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout         Manage the rollout of a resource
  scale           Set a new size for a deployment, replica set, or replication controller
  autoscale       Auto-scale a deployment, replica set, stateful set, or replication controller

Cluster Management Commands:
  certificate     Modify certificate resources.
  cluster-info    Display cluster information
  top             Display resource (CPU/memory) usage
  cordon          Mark node as unschedulable
  uncordon        Mark node as schedulable
  drain           Drain node in preparation for maintenance
  taint           Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe        Show details of a specific resource or group of resources
  logs            Print the logs for a container in a pod
  attach          Attach to a running container
  exec            Execute a command in a container
  port-forward    Forward one or more local ports to a pod
  proxy           Run a proxy to the Kubernetes API server
  cp              Copy files and directories to and from containers
  auth            Inspect authorization
  debug           Create debugging sessions for troubleshooting workloads and nodes
  events          List events

Advanced Commands:
  diff            Diff the live version against a would-be applied version
  apply           Apply a configuration to a resource by file name or stdin
  patch           Update fields of a resource
  replace         Replace a resource by file name or stdin
  wait            Experimental: Wait for a specific condition on one or many resources
  kustomize       Build a kustomization target from a directory or URL

Settings Commands:
  label           Update the labels on a resource
  annotate        Update the annotations on a resource
  completion      Output shell completion code for the specified shell (bash, zsh, fish, or powershell)

Other Commands:
  api-resources   Print the supported API resources on the server
  api-versions    Print the supported API versions on the server, in the form of "group/version"
  config          Modify kubeconfig files
  plugin          Provides utilities for interacting with plugins
  version         Print the client and server version information

Usage:
  kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
User "test-admin" set.
Cluster "local" set.
Context "test" created.
Switched to context "test".
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://127.0.0.1:6443
  name: local
contexts:
- context:
    cluster: local
    user: test-admin
  name: test
current-context: test
kind: Config
preferences: {}
users:
- name: test-admin
  user:
    token: REDACTED
+++ [0318 12:47:59] Setup complete
+++ [0318 12:47:59] Building kube-apiserver
go version go1.20.2 linux/amd64
+++ [0318 12:48:00] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kube-apiserver (static)
+++ [0318 12:49:24] Starting kube-apiserver
I0318 12:49:25.648265 19996 serving.go:342] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0318 12:49:25.648370 19996 server.go:551] external host was not specified, using 10.33.29.5
W0318 12:49:25.648383 19996 authentication.go:520] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
I0318 12:49:25.650566 19996 server.go:165] Version: v1.27.0-beta.0.26+7a1ef208ec9c49
I0318 12:49:25.650660 19996 server.go:167] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
W0318 12:49:25.960150 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:49:25.960175 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:49:25.960202 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:49:25.961169 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
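The "User/Cluster/Context set" messages and the kubeconfig dump above are produced by a handful of kubectl config invocations. The exact flags used by legacy-script.sh are not visible in this log, so the following is an illustrative reconstruction only; the token value is a placeholder:

  # Sketch: commands that would yield a kubeconfig like the one printed above.
  kubectl config set-credentials test-admin --token=some-token
  kubectl config set-cluster local --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true
  kubectl config set-context test --cluster=local --user=test-admin
  kubectl config use-context test
  kubectl config view          # prints the YAML, with secrets shown as REDACTED
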
W0318 12:49:25.961191 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:49:25.961199 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:49:25.961205 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:49:25.961219 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:49:25.978475 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:49:25.978542 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:49:25.979194 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:49:25.979249 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:49:25.979291 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:49:25.979393 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:49:25.979450 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:49:25.979530 19996 plugins.go:158] Loaded 6 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,RuntimeClass,DefaultIngressClass.
I0318 12:49:25.979542 19996 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ResourceQuota.
W0318 12:49:25.979733 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:49:25.979774 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:49:25.989149 19996 handler.go:165] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
W0318 12:49:25.989183 19996 genericapiserver.go:752] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
W0318 12:49:25.989396 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:49:25.990377 19996 instance.go:282] Using reconciler: lease
I0318 12:49:26.039909 19996 handler.go:165] Adding GroupVersion v1 to ResourceManager
I0318 12:49:26.040321 19996 instance.go:651] API group "internal.apiserver.k8s.io" is not enabled, skipping.
I0318 12:49:26.084738 19996 instance.go:651] API group "resource.k8s.io" is not enabled, skipping.
I0318 12:49:26.100859 19996 handler.go:165] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
W0318 12:49:26.100896 19996 genericapiserver.go:752] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0318 12:49:26.100903 19996 genericapiserver.go:752] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
I0318 12:49:26.103273 19996 handler.go:165] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
W0318 12:49:26.103302 19996 genericapiserver.go:752] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
I0318 12:49:26.106091 19996 handler.go:165] Adding GroupVersion autoscaling v2 to ResourceManager
I0318 12:49:26.106843 19996 handler.go:165] Adding GroupVersion autoscaling v1 to ResourceManager
W0318 12:49:26.106869 19996 genericapiserver.go:752] Skipping API autoscaling/v2beta1 because it has no resources.
W0318 12:49:26.106875 19996 genericapiserver.go:752] Skipping API autoscaling/v2beta2 because it has no resources.
I0318 12:49:26.110089 19996 handler.go:165] Adding GroupVersion batch v1 to ResourceManager
W0318 12:49:26.110124 19996 genericapiserver.go:752] Skipping API batch/v1beta1 because it has no resources.
I0318 12:49:26.112996 19996 handler.go:165] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
W0318 12:49:26.113023 19996 genericapiserver.go:752] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W0318 12:49:26.113030 19996 genericapiserver.go:752] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
I0318 12:49:26.115377 19996 handler.go:165] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
W0318 12:49:26.115403 19996 genericapiserver.go:752] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0318 12:49:26.117255 19996 genericapiserver.go:752] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
I0318 12:49:26.117890 19996 handler.go:165] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
I0318 12:49:26.121565 19996 handler.go:165] Adding GroupVersion networking.k8s.io v1 to ResourceManager
W0318 12:49:26.121595 19996 genericapiserver.go:752] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0318 12:49:26.121601 19996 genericapiserver.go:752] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
I0318 12:49:26.124005 19996 handler.go:165] Adding GroupVersion node.k8s.io v1 to ResourceManager
W0318 12:49:26.124034 19996 genericapiserver.go:752] Skipping API node.k8s.io/v1beta1 because it has no resources.
W0318 12:49:26.124040 19996 genericapiserver.go:752] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0318 12:49:26.126849 19996 handler.go:165] Adding GroupVersion policy v1 to ResourceManager
W0318 12:49:26.126878 19996 genericapiserver.go:752] Skipping API policy/v1beta1 because it has no resources.
I0318 12:49:26.130649 19996 handler.go:165] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
W0318 12:49:26.130677 19996 genericapiserver.go:752] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0318 12:49:26.130685 19996 genericapiserver.go:752] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0318 12:49:26.133424 19996 handler.go:165] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
W0318 12:49:26.133461 19996 genericapiserver.go:752] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0318 12:49:26.133470 19996 genericapiserver.go:752] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0318 12:49:26.137512 19996 handler.go:165] Adding GroupVersion storage.k8s.io v1 to ResourceManager
W0318 12:49:26.137546 19996 genericapiserver.go:752] Skipping API storage.k8s.io/v1beta1 because it has no resources.
W0318 12:49:26.137552 19996 genericapiserver.go:752] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0318 12:49:26.140698 19996 handler.go:165] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
I0318 12:49:26.142148 19996 handler.go:165] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
W0318 12:49:26.142175 19996 genericapiserver.go:752] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
W0318 12:49:26.142180 19996 genericapiserver.go:752] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
I0318 12:49:26.148176 19996 handler.go:165] Adding GroupVersion apps v1 to ResourceManager
W0318 12:49:26.148206 19996 genericapiserver.go:752] Skipping API apps/v1beta2 because it has no resources.
W0318 12:49:26.148212 19996 genericapiserver.go:752] Skipping API apps/v1beta1 because it has no resources.
I0318 12:49:26.151107 19996 handler.go:165] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
W0318 12:49:26.151133 19996 genericapiserver.go:752] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
W0318 12:49:26.151139 19996 genericapiserver.go:752] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0318 12:49:26.153777 19996 handler.go:165] Adding GroupVersion events.k8s.io v1 to ResourceManager
W0318 12:49:26.153810 19996 genericapiserver.go:752] Skipping API events.k8s.io/v1beta1 because it has no resources.
W0318 12:49:26.155834 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:49:26.158339 19996 handler.go:165] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
W0318 12:49:26.158364 19996 genericapiserver.go:752] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
W0318 12:49:26.158926 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:49:26.776842 19996 secure_serving.go:210] Serving securely on 127.0.0.1:6443
I0318 12:49:26.777307 19996 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::hack/testdata/ca/ca.crt"
I0318 12:49:26.777562 19996 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key"
I0318 12:49:26.777769 19996 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0318 12:49:26.778760 19996 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0318 12:49:26.778782 19996 controller.go:80] Starting OpenAPI V3 AggregationController
I0318 12:49:26.778812 19996 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0318 12:49:26.778905 19996 apf_controller.go:361] Starting API Priority and Fairness config controller
W0318 12:49:26.779268 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:49:26.779471 19996 gc_controller.go:78] Starting apiserver lease garbage collector
W0318 12:49:26.779343 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:49:26.779692 19996 controller.go:83] Starting OpenAPI AggregationController
I0318 12:49:26.779783 19996 autoregister_controller.go:141] Starting autoregister controller
I0318 12:49:26.779974 19996 cache.go:32] Waiting for caches to sync for autoregister controller
I0318 12:49:26.780429 19996 customresource_discovery_controller.go:288] Starting DiscoveryController
I0318 12:49:26.780554 19996 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::hack/testdata/ca/ca.crt"
I0318 12:49:26.781430 19996 available_controller.go:494] Starting AvailableConditionController
I0318 12:49:26.781912 19996 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0318 12:49:26.781745 19996 system_namespaces_controller.go:67] Starting system namespaces controller
W0318 12:49:26.781752 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:49:26.782260 19996 gc_controller.go:78] Starting apiserver lease garbage collector
I0318 12:49:26.781789 19996 handler_discovery.go:392] Starting ResourceDiscoveryManager
W0318 12:49:26.782867 19996 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:49:26.783043 19996 controller.go:121] Starting legacy_token_tracking_controller
I0318 12:49:26.783067 19996 shared_informer.go:311] Waiting for caches to sync for configmaps
I0318 12:49:26.783143 19996 crdregistration_controller.go:111] Starting crd-autoregister controller
I0318 12:49:26.783157 19996 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
I0318 12:49:26.779981 19996 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0318 12:49:26.783175 19996 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
I0318 12:49:26.783218 19996 controller.go:85] Starting OpenAPI controller
I0318 12:49:26.783253 19996 controller.go:85] Starting OpenAPI V3 controller
I0318 12:49:26.783279 19996 naming_controller.go:291] Starting NamingConditionController
I0318 12:49:26.783305 19996 establishing_controller.go:76] Starting EstablishingController
I0318 12:49:26.783329 19996 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0318 12:49:26.783348 19996 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0318 12:49:26.783366 19996 crd_finalizer.go:266] Starting CRDFinalizer
E0318 12:49:26.857607 19996 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
I0318 12:49:26.879747 19996 apf_controller.go:366] Running API Priority and Fairness config worker
I0318 12:49:26.879781 19996 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0318 12:49:26.879981 19996 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0318 12:49:26.880214 19996 cache.go:39] Caches are synced for autoregister controller
I0318 12:49:26.882095 19996 cache.go:39] Caches are synced for AvailableConditionController controller
I0318 12:49:26.883118 19996 shared_informer.go:318] Caches are synced for configmaps
I0318 12:49:26.883322 19996 shared_informer.go:318] Caches are synced for crd-autoregister
I0318 12:49:26.883352 19996 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
I0318 12:49:26.883872 19996 controller.go:624] quota admission added evaluator for: namespaces
I0318 12:49:27.068579 19996 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
I0318 12:49:27.506873 19996 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0318 12:49:27.795746 19996 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0318 12:49:27.808894 19996 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0318 12:49:27.808925 19996 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0318 12:49:29.235481 19996 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0318 12:49:29.349470 19996 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0318 12:49:29.527973 19996 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.0.0.1]
W0318 12:49:29.572159 19996 lease.go:251] Resetting endpoints for master service "kubernetes" to [10.33.29.5]
I0318 12:49:29.573386 19996 controller.go:624] quota admission added evaluator for: endpoints
I0318 12:49:29.585848 19996 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
+++ [0318 12:49:29] On try 5, apiserver: ok
+++ [0318 12:49:29] Building kube-controller-manager
go version go1.20.2 linux/amd64
+++ [0318 12:49:30] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kube-controller-manager (static)
+++ [0318 12:50:07] Generate kubeconfig for controller-manager
+++ [0318 12:50:07] Starting controller-manager
I0318 12:50:08.353673 23056 serving.go:348] Generated self-signed cert in-memory
W0318 12:50:08.747709 23056 authentication.go:426] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0318 12:50:08.747752 23056 authentication.go:320] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0318 12:50:08.747761 23056 authentication.go:344] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0318 12:50:08.747775 23056 authorization.go:225] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0318 12:50:08.747785 23056 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0318 12:50:08.748267 23056 controllermanager.go:187] "Starting" version="v1.27.0-beta.0.26+7a1ef208ec9c49"
I0318 12:50:08.748307 23056 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0318 12:50:08.750410 23056 secure_serving.go:210] Serving securely on [::]:10257
I0318 12:50:08.750551 23056 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0318 12:50:08.750806 23056 leaderelection.go:245] attempting to acquire leader lease kube-system/kube-controller-manager...
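The "On try 5, apiserver: ok" line above comes from the harness polling the freshly started server until it answers. The real helper lives in the repo's hack/lib scripts; a rough sketch of the same wait loop, assuming the default local address and the /healthz endpoint:

  # Sketch: poll kube-apiserver until it reports healthy, at most ten tries.
  for try in $(seq 1 10); do
    if curl -ks https://127.0.0.1:6443/healthz | grep -q ok; then
      echo "On try ${try}, apiserver: ok"
      break
    fi
    sleep 1
  done
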
+++ [0318 12:50:08] On try 2, controller-manager: ok
I0318 12:50:08.772344 23056 leaderelection.go:255] successfully acquired lease kube-system/kube-controller-manager
I0318 12:50:08.772520 23056 event.go:307] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="8ae82a97-c58a-11ed-8f15-da574695a788_9275f895-47b9-485c-9895-98ac9c1b7894 became leader"
I0318 12:50:08.780330 23056 controllermanager.go:661] "Controller is disabled because there is no private key" controller="serviceaccount-token"
W0318 12:50:08.780868 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.784487 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.784561 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.784591 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.784648 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
W0318 12:50:08.784680 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.784708 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="jobs.batch"
W0318 12:50:08.784799 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.784833 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
W0318 12:50:08.784860 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.784881 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.784909 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.784962 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="endpoints"
W0318 12:50:08.784997 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.785022 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
W0318 12:50:08.785060 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.785094 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
W0318 12:50:08.785119 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.785149 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="limitranges"
W0318 12:50:08.785170 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.785192 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
W0318 12:50:08.785205 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.785229 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
W0318 12:50:08.785248 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.785275 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
W0318 12:50:08.785310 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.785336 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
W0318 12:50:08.785365 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.785395 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="deployments.apps"
W0318 12:50:08.785425 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.785454 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
W0318 12:50:08.785533 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.785601 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
W0318 12:50:08.785668 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.785701 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
W0318 12:50:08.785727 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.785765 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.785787 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
W0318 12:50:08.785808 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.785835 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
W0318 12:50:08.785918 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.785993 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.786075 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="podtemplates"
W0318 12:50:08.786131 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.786174 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
W0318 12:50:08.786236 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.786300 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
I0318 12:50:08.786384 23056 resource_quota_controller.go:295] "Starting resource quota controller"
I0318 12:50:08.786410 23056 shared_informer.go:311] Waiting for caches to sync for resource quota
I0318 12:50:08.786410 23056 controllermanager.go:638] "Started controller" controller="resourcequota"
I0318 12:50:08.786473 23056 resource_quota_monitor.go:304] "QuotaMonitor running"
W0318 12:50:08.786794 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.786853 23056 controllermanager.go:638] "Started controller" controller="daemonset"
I0318 12:50:08.787011 23056 daemon_controller.go:289] "Starting daemon sets controller"
I0318 12:50:08.787031 23056 shared_informer.go:311] Waiting for caches to sync for daemon sets
I0318 12:50:08.787203 23056 controllermanager.go:638] "Started controller" controller="cronjob"
I0318 12:50:08.787225 23056 controllermanager.go:616] "Warning: skipping controller" controller="nodeipam"
I0318 12:50:08.787368 23056 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
I0318 12:50:08.787387 23056 shared_informer.go:311] Waiting for caches to sync for cronjob
W0318 12:50:08.787744 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.787829 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.787916 23056 controllermanager.go:638] "Started controller" controller="persistentvolume-binder"
I0318 12:50:08.788263 23056 controllermanager.go:638] "Started controller" controller="pv-protection"
I0318 12:50:08.788328 23056 pv_controller_base.go:323] "Starting persistent volume controller"
I0318 12:50:08.788331 23056 pv_protection_controller.go:78] "Starting PV protection controller"
I0318 12:50:08.788353 23056 shared_informer.go:311] Waiting for caches to sync for PV protection
I0318 12:50:08.788343 23056 shared_informer.go:311] Waiting for caches to sync for persistent volume
I0318 12:50:08.788769 23056 controllermanager.go:638] "Started controller" controller="endpointslice"
I0318 12:50:08.788918 23056 endpointslice_controller.go:252] Starting endpoint slice controller
I0318 12:50:08.788937 23056 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
I0318 12:50:08.789260 23056 controllermanager.go:638] "Started controller" controller="endpointslicemirroring"
I0318 12:50:08.789473 23056 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
I0318 12:50:08.789493 23056 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
I0318 12:50:08.789612 23056 controllermanager.go:638] "Started controller" controller="statefulset"
I0318 12:50:08.789722 23056 stateful_set.go:161] "Starting stateful set controller"
I0318 12:50:08.789740 23056 shared_informer.go:311] Waiting for caches to sync for stateful set
W0318 12:50:08.789973 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.790000 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.790014 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.790129 23056 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0318 12:50:08.790741 23056 controllermanager.go:638] "Started controller" controller="attachdetach"
I0318 12:50:08.790927 23056 attach_detach_controller.go:343] "Starting attach detach controller"
I0318 12:50:08.790949 23056 shared_informer.go:311] Waiting for caches to sync for attach detach
I0318 12:50:08.791057 23056 controllermanager.go:638] "Started controller" controller="replicaset"
I0318 12:50:08.791264 23056 replica_set.go:201] "Starting controller" name="replicaset"
I0318 12:50:08.791289 23056 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
I0318 12:50:08.791626 23056 controllermanager.go:638] "Started controller" controller="horizontalpodautoscaling"
I0318 12:50:08.791695 23056 horizontal.go:200] "Starting HPA controller"
I0318 12:50:08.791711 23056 shared_informer.go:311] Waiting for caches to sync for HPA
I0318 12:50:08.791988 23056 controllermanager.go:638] "Started controller" controller="disruption"
I0318 12:50:08.792007 23056 controllermanager.go:603] "Warning: controller is disabled" controller="tokencleaner"
I0318 12:50:08.792047 23056 disruption.go:423] Sending events to api server.
I0318 12:50:08.792111 23056 disruption.go:434] Starting disruption controller
I0318 12:50:08.792120 23056 shared_informer.go:311] Waiting for caches to sync for disruption
E0318 12:50:08.792390 23056 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
I0318 12:50:08.792423 23056 controllermanager.go:616] "Warning: skipping controller" controller="service"
I0318 12:50:08.792681 23056 controllermanager.go:638] "Started controller" controller="endpoint"
W0318 12:50:08.792845 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.792853 23056 endpoints_controller.go:172] Starting endpoint controller
I0318 12:50:08.792866 23056 shared_informer.go:311] Waiting for caches to sync for endpoint
I0318 12:50:08.792894 23056 controllermanager.go:638] "Started controller" controller="clusterrole-aggregation"
I0318 12:50:08.793023 23056 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
I0318 12:50:08.793047 23056 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
I0318 12:50:08.793181 23056 controllermanager.go:638] "Started controller" controller="deployment"
I0318 12:50:08.793447 23056 deployment_controller.go:168] "Starting controller" controller="deployment"
I0318 12:50:08.793470 23056 shared_informer.go:311] Waiting for caches to sync for deployment
W0318 12:50:08.793545 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.793684 23056 controllermanager.go:638] "Started controller" controller="csrapproving"
I0318 12:50:08.793901 23056 controllermanager.go:638] "Started controller" controller="csrcleaner"
I0318 12:50:08.793963 23056 core.go:224] "Will not configure cloud provider routes for allocate-node-cidrs" CIDRs=false routes=true
I0318 12:50:08.793995 23056 controllermanager.go:616] "Warning: skipping controller" controller="route"
I0318 12:50:08.794062 23056 certificate_controller.go:112] Starting certificate controller "csrapproving"
I0318 12:50:08.794095 23056 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
I0318 12:50:08.794200 23056 cleaner.go:82] Starting CSR cleaner controller
I0318 12:50:08.794396 23056 controllermanager.go:638] "Started controller" controller="persistentvolume-expander"
I0318 12:50:08.794456 23056 expand_controller.go:339] "Starting expand controller"
I0318 12:50:08.794472 23056 shared_informer.go:311] Waiting for caches to sync for expand
I0318 12:50:08.794780 23056 controllermanager.go:638] "Started controller" controller="replicationcontroller"
I0318 12:50:08.794835 23056 replica_set.go:201] "Starting controller" name="replicationcontroller"
I0318 12:50:08.794849 23056 shared_informer.go:311] Waiting for caches to sync for ReplicationController
I0318 12:50:08.795013 23056 controllermanager.go:638] "Started controller" controller="podgc"
I0318 12:50:08.795146 23056 gc_controller.go:103] Starting GC controller
I0318 12:50:08.795174 23056 shared_informer.go:311] Waiting for caches to sync for GC
W0318 12:50:08.798090 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.798146 23056 controllermanager.go:638] "Started controller" controller="namespace"
I0318 12:50:08.798277 23056 namespace_controller.go:197] "Starting namespace controller"
I0318 12:50:08.798305 23056 shared_informer.go:311] Waiting for caches to sync for namespace
I0318 12:50:08.798502 23056 controllermanager.go:638] "Started controller" controller="job"
I0318 12:50:08.798543 23056 controllermanager.go:603] "Warning: controller is disabled" controller="bootstrapsigner"
I0318 12:50:08.798718 23056 job_controller.go:202] Starting job controller
I0318 12:50:08.798737 23056 shared_informer.go:311] Waiting for caches to sync for job
I0318 12:50:08.798835 23056 controllermanager.go:638] "Started controller" controller="pvc-protection"
I0318 12:50:08.799011 23056 pvc_protection_controller.go:102] "Starting PVC protection controller"
I0318 12:50:08.799034 23056 shared_informer.go:311] Waiting for caches to sync for PVC protection
I0318 12:50:08.799181 23056 controllermanager.go:638] "Started controller" controller="root-ca-cert-publisher"
I0318 12:50:08.799330 23056 publisher.go:101] Starting root CA certificate configmap publisher
I0318 12:50:08.799356 23056 shared_informer.go:311] Waiting for caches to sync for crt configmap
I0318 12:50:08.799738 23056 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
I0318 12:50:08.799758 23056 shared_informer.go:311] Waiting for caches to sync for garbage collector
I0318 12:50:08.799810 23056 controllermanager.go:638] "Started controller" controller="garbagecollector"
I0318 12:50:08.799906 23056 graph_builder.go:294] "Running" component="GraphBuilder"
I0318 12:50:08.800224 23056 node_lifecycle_controller.go:431] "Controller will reconcile labels"
I0318 12:50:08.800266 23056 controllermanager.go:638] "Started controller" controller="nodelifecycle"
I0318 12:50:08.800452 23056 node_lifecycle_controller.go:465] "Sending events to api server"
I0318 12:50:08.800533 23056 node_lifecycle_controller.go:476] "Starting node controller"
I0318 12:50:08.800543 23056 shared_informer.go:311] Waiting for caches to sync for taint
I0318 12:50:08.800555 23056 controllermanager.go:638] "Started controller" controller="ephemeral-volume"
I0318 12:50:08.800621 23056 controller.go:169] "Starting ephemeral volume controller"
I0318 12:50:08.800633 23056 shared_informer.go:311] Waiting for caches to sync for ephemeral
I0318 12:50:08.800787 23056 controllermanager.go:638] "Started controller" controller="serviceaccount"
I0318 12:50:08.800918 23056 serviceaccounts_controller.go:111] "Starting service account controller"
I0318 12:50:08.800941 23056 shared_informer.go:311] Waiting for caches to sync for service account
I0318 12:50:08.804308 23056 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-serving"
I0318 12:50:08.804341 23056 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
I0318 12:50:08.804404 23056 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::hack/testdata/ca/ca.crt::hack/testdata/ca/ca.key"
I0318 12:50:08.806651 23056 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-client"
I0318 12:50:08.806677 23056 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
I0318 12:50:08.806705 23056 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::hack/testdata/ca/ca.crt::hack/testdata/ca/ca.key"
I0318 12:50:08.808133 23056 certificate_controller.go:112] Starting certificate controller "csrsigning-kube-apiserver-client"
I0318 12:50:08.808155 23056 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I0318 12:50:08.808182 23056 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::hack/testdata/ca/ca.crt::hack/testdata/ca/ca.key"
I0318 12:50:08.809560 23056 controllermanager.go:638] "Started controller" controller="csrsigning"
I0318 12:50:08.809595 23056 certificate_controller.go:112] Starting certificate controller "csrsigning-legacy-unknown"
I0318 12:50:08.809614 23056 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
I0318 12:50:08.809659 23056 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::hack/testdata/ca/ca.crt::hack/testdata/ca/ca.key"
I0318 12:50:08.809867 23056 controllermanager.go:638] "Started controller" controller="ttl"
I0318 12:50:08.809909 23056 ttl_controller.go:124] "Starting TTL controller"
I0318 12:50:08.809924 23056 shared_informer.go:311] Waiting for caches to sync for TTL
E0318 12:50:08.810087 23056 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
I0318 12:50:08.810117 23056 controllermanager.go:616] "Warning: skipping controller" controller="cloud-node-lifecycle"
I0318 12:50:08.810404 23056 controllermanager.go:638] "Started controller" controller="ttl-after-finished"
I0318 12:50:08.810469 23056 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
I0318 12:50:08.810639 23056 shared_informer.go:311] Waiting for caches to sync for TTL after finished
I0318 12:50:08.813436 23056 shared_informer.go:311] Waiting for caches to sync for resource quota
W0318 12:50:08.827531 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.827853 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.827959 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.828271 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.828434 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.828655 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.828778 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.828900 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.828962 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0318 12:50:08.829020 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 12:50:08.829466 23056 shared_informer.go:311] Waiting for caches to sync for garbage collector
I0318 12:50:08.887887 23056 shared_informer.go:318] Caches are synced for cronjob
I0318 12:50:08.889140 23056 shared_informer.go:318] Caches are synced for PV protection
I0318 12:50:08.890372 23056 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
I0318 12:50:08.891654 23056 shared_informer.go:318] Caches are synced for ReplicaSet
I0318 12:50:08.891800 23056 shared_informer.go:318] Caches are synced for HPA
I0318 12:50:08.893000 23056 shared_informer.go:318] Caches are synced for endpoint
I0318 12:50:08.893100 23056 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
I0318 12:50:08.894149 23056 shared_informer.go:318] Caches are synced for certificate-csrapproving
I0318 12:50:08.895152 23056 shared_informer.go:318] Caches are synced for ReplicationController
I0318 12:50:08.895246 23056 shared_informer.go:318] Caches are synced for expand
I0318 12:50:08.899855 23056 shared_informer.go:318] Caches are synced for crt configmap
I0318 12:50:08.899899 23056 shared_informer.go:318] Caches are synced for job
I0318 12:50:08.899921 23056 shared_informer.go:318] Caches are synced for PVC protection
I0318 12:50:08.901039 23056 shared_informer.go:318] Caches are synced for ephemeral
I0318 12:50:08.904444 23056 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
I0318 12:50:08.906733 23056 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
I0318 12:50:08.908326 23056 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0318 12:50:08.910650 23056 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
I0318 12:50:08.910772 23056 shared_informer.go:318] Caches are synced for TTL after finished
I0318 12:50:08.989160 23056 shared_informer.go:318] Caches are synced for endpoint_slice
I0318 12:50:08.991546 23056 shared_informer.go:318] Caches are synced for attach detach
I0318 12:50:08.992820 23056 shared_informer.go:318] Caches are synced for disruption
I0318 12:50:08.994055 23056 shared_informer.go:318] Caches are synced for deployment
I0318 12:50:08.995290 23056 shared_informer.go:318] Caches are synced for GC
I0318 12:50:09.000828 23056 shared_informer.go:318] Caches are synced for taint
I0318 12:50:09.000891 23056 taint_manager.go:206] "Starting NoExecuteTaintManager"
I0318 12:50:09.000993 23056 taint_manager.go:211] "Sending events to api server"
I0318 12:50:09.010280 23056 shared_informer.go:318] Caches are synced for TTL
I0318 12:50:09.089002 23056 shared_informer.go:318] Caches are synced for persistent volume
I0318 12:50:09.098420 23056 shared_informer.go:318] Caches are synced for namespace
I0318 12:50:09.102116 23056 shared_informer.go:318] Caches are synced for service account
I0318 12:50:09.103649 19996 controller.go:624] quota admission added evaluator for: serviceaccounts
I0318 12:50:09.187158 23056 shared_informer.go:318] Caches are synced for daemon sets
I0318 12:50:09.187349 23056 shared_informer.go:318] Caches are synced for resource quota
I0318 12:50:09.190404 23056 shared_informer.go:318] Caches are synced for stateful set
I0318 12:50:09.214278 23056 shared_informer.go:318] Caches are synced for resource quota
node/127.0.0.1 created
I0318 12:50:09.455069 23056 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"127.0.0.1\" does not exist"
+++ [0318 12:50:09] Checking kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.0-beta.0.26+7a1ef208ec9c49", GitCommit:"7a1ef208ec9c49b5ef89572c80995de7f0dd91d7", GitTreeState:"clean", BuildDate:"2023-03-17T23:59:16Z", GoVersion:"go1.20.2", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.0-beta.0.26+7a1ef208ec9c49", GitCommit:"7a1ef208ec9c49b5ef89572c80995de7f0dd91d7", GitTreeState:"clean", BuildDate:"2023-03-17T23:59:16Z", GoVersion:"go1.20.2", Compiler:"gc", Platform:"linux/amd64"}
I0318 12:50:09.530047 23056 shared_informer.go:318] Caches are synced for garbage collector
I0318 12:50:09.600494 23056 shared_informer.go:318] Caches are synced for garbage collector
I0318 12:50:09.600537 23056 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocate IP 10.0.0.1: provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   40s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_version_tests
+++ [0318 12:50:09] Testing kubectl version
{
  "major": "1",
  "minor": "27+",
  "gitVersion": "v1.27.0-beta.0.26+7a1ef208ec9c49",
  "gitCommit": "7a1ef208ec9c49b5ef89572c80995de7f0dd91d7",
  "gitTreeState": "clean",
  "buildDate": "2023-03-17T23:59:16Z",
  "goVersion": "go1.20.2",
  "compiler": "gc",
  "platform": "linux/amd64"
}
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
+++ [0318 12:50:10] Testing kubectl version: check client only output matches expected output
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Successful: the flag '--client' shows correct client info
Successful: the flag '--client' correctly has no server version info
+++ [0318 12:50:10] Testing kubectl version: verify json output
Successful: --output json has correct client info
Successful: --output json has correct server info
+++ [0318 12:50:10] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
Successful: --client --output json has correct client info
Successful: --client --output json has no server info
+++ [0318 12:50:10] Testing kubectl version: compare json output using additional --short flag
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Successful: --short --output client json info is equal to non short result
Successful: --short --output server json info is equal to non short result
+++ [0318 12:50:10] Testing kubectl version: compare json output with yaml output
Successful: --output json/yaml has identical information
+++ [0318 12:50:10] Testing kubectl version: contains semantic version of embedded kustomize
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Successful
message:Client Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.0-beta.0.26+7a1ef208ec9c49", GitCommit:"7a1ef208ec9c49b5ef89572c80995de7f0dd91d7", GitTreeState:"clean", BuildDate:"2023-03-17T23:59:16Z", GoVersion:"go1.20.2", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.0-beta.0.26+7a1ef208ec9c49", GitCommit:"7a1ef208ec9c49b5ef89572c80995de7f0dd91d7", GitTreeState:"clean", BuildDate:"2023-03-17T23:59:16Z", GoVersion:"go1.20.2", Compiler:"gc", Platform:"linux/amd64"}
has not:Kustomize Version\: unknown
Successful
message:Client Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.0-beta.0.26+7a1ef208ec9c49", GitCommit:"7a1ef208ec9c49b5ef89572c80995de7f0dd91d7", GitTreeState:"clean", BuildDate:"2023-03-17T23:59:16Z", GoVersion:"go1.20.2", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.0-beta.0.26+7a1ef208ec9c49", GitCommit:"7a1ef208ec9c49b5ef89572c80995de7f0dd91d7", GitTreeState:"clean", BuildDate:"2023-03-17T23:59:16Z", GoVersion:"go1.20.2", Compiler:"gc", Platform:"linux/amd64"}
has:Kustomize Version\: v[[:digit:]][[:digit:]]*\.[[:digit:]][[:digit:]]*\.[[:digit:]][[:digit:]]*
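The assertions above exercise kubectl version's output modes against the same binary. For reference, the shapes being compared can be produced directly; jq here is an assumption about tooling on the host, used only to pick one field:

  # Sketch: the output modes exercised by these version assertions.
  kubectl version --client                    # human-readable, client info only
  kubectl version --output=json               # full client+server JSON, serverVersion present
  kubectl version --client --output=json | jq .clientVersion.gitVersion
  kubectl version --output=yaml               # same data as JSON, per the equality check above
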
+++ [0318 12:50:10] Testing kubectl version: all output formats include kustomize version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Successful
message:Client Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.0-beta.0.26+7a1ef208ec9c49", GitCommit:"7a1ef208ec9c49b5ef89572c80995de7f0dd91d7", GitTreeState:"clean", BuildDate:"2023-03-17T23:59:16Z", GoVersion:"go1.20.2", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
has:Kustomize Version
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Successful
message:Client Version: v1.27.0-beta.0.26+7a1ef208ec9c49
Kustomize Version: v5.0.1
Server Version: v1.27.0-beta.0.26+7a1ef208ec9c49
has:Kustomize Version
Successful
message:clientVersion:
  buildDate: "2023-03-17T23:59:16Z"
  compiler: gc
  gitCommit: 7a1ef208ec9c49b5ef89572c80995de7f0dd91d7
  gitTreeState: clean
  gitVersion: v1.27.0-beta.0.26+7a1ef208ec9c49
  goVersion: go1.20.2
  major: "1"
  minor: 27+
  platform: linux/amd64
kustomizeVersion: v5.0.1
serverVersion:
  buildDate: "2023-03-17T23:59:16Z"
  compiler: gc
  gitCommit: 7a1ef208ec9c49b5ef89572c80995de7f0dd91d7
  gitTreeState: clean
  gitVersion: v1.27.0-beta.0.26+7a1ef208ec9c49
  goVersion: go1.20.2
  major: "1"
  minor: 27+
  platform: linux/amd64
has:kustomizeVersion
Successful
message:{
  "clientVersion": {
    "major": "1",
    "minor": "27+",
    "gitVersion": "v1.27.0-beta.0.26+7a1ef208ec9c49",
    "gitCommit": "7a1ef208ec9c49b5ef89572c80995de7f0dd91d7",
    "gitTreeState": "clean",
    "buildDate": "2023-03-17T23:59:16Z",
    "goVersion": "go1.20.2",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "kustomizeVersion": "v5.0.1",
  "serverVersion": {
    "major": "1",
    "minor": "27+",
    "gitVersion": "v1.27.0-beta.0.26+7a1ef208ec9c49",
    "gitCommit": "7a1ef208ec9c49b5ef89572c80995de7f0dd91d7",
    "gitTreeState": "clean",
    "buildDate": "2023-03-17T23:59:16Z",
    "goVersion": "go1.20.2",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
has:kustomizeVersion
+++ exit code: 0
Recording: run_kubectl_results_tests
Running command: run_kubectl_results_tests

+++ Running case: test-cmd.run_kubectl_results_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_results_tests
+++ [0318 12:50:11] Testing kubectl result output
Successful: stdout for kubectl list
Successful: stderr for kubectl list
results.sh:45: Successful: kubectl list
Successful: stdout for kubectl get pod/no-such-pod
Successful: stderr for kubectl get pod/no-such-pod
results.sh:54: Successful: kubectl get pod/no-such-pod
+++ exit code: 0
Recording: run_kubectl_config_set_tests
Running command: run_kubectl_config_set_tests

+++ Running case: test-cmd.run_kubectl_config_set_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_config_set_tests
+++ [0318 12:50:11] Creating namespace namespace-1679143811-30125
namespace/namespace-1679143811-30125 created
Context "test" modified.
+++ [0318 12:50:11] Testing kubectl(v1:config set)
Cluster "test-cluster" set.
Property "clusters.test-cluster.certificate-authority-data" set.
Property "clusters.test-cluster.certificate-authority-data" set.
+++ exit code: 0
Recording: run_kubectl_config_set_cluster_tests
Running command: run_kubectl_config_set_cluster_tests

+++ Running case: test-cmd.run_kubectl_config_set_cluster_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_config_set_cluster_tests
+++ [0318 12:50:11] Creating namespace namespace-1679143811-20240
namespace/namespace-1679143811-20240 created
Context "test" modified.
+++ [0318 12:50:11] Testing kubectl config set-cluster
Cluster "test-cluster-1" set.
Cluster "test-cluster-2" set.
Cluster "test-cluster-3" set. +++ exit code: 0 Recording: run_kubectl_config_set_credentials_tests Running command: run_kubectl_config_set_credentials_tests +++ Running case: test-cmd.run_kubectl_config_set_credentials_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_config_set_credentials_tests +++ [0318 12:50:12] Creating namespace namespace-1679143812-6002 namespace/namespace-1679143812-6002 created Context "test" modified. +++ [0318 12:50:12] Testing kubectl config set-credentials User "user1" set. User "user2" set. User "user3" set. +++ exit code: 0 Recording: run_kubectl_local_proxy_tests Running command: run_kubectl_local_proxy_tests +++ Running case: test-cmd.run_kubectl_local_proxy_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_local_proxy_tests +++ [0318 12:50:13] Testing kubectl local proxy +++ [0318 12:50:13] Starting kubectl proxy on random port; output file in proxy-port.out.FOyCy; args: +++ [0318 12:50:13] Attempt 0 to read proxy-port.out.FOyCy... +++ [0318 12:50:13] kubectl proxy running on port 35019 +++ [0318 12:50:13] On try 1, kubectl proxy: ok +++ [0318 12:50:13] Stopping proxy on port 35019 /home/prow/go/src/k8s.io/kubernetes/hack/lib/logging.sh: line 166: 24078 Killed kubectl proxy --port=0 --www=. > "${PROXY_PORT_FILE}" 2>&1 +++ [0318 12:50:13] Starting kubectl proxy on random port; output file in proxy-port.out.JF8UF; args: I0318 12:50:14.001379 23056 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone="" I0318 12:50:14.001567 23056 node_lifecycle_controller.go:1027] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" I0318 12:50:14.001634 23056 event.go:307] "Event occurred" object="127.0.0.1" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller" +++ [0318 12:50:14] Attempt 0 to read proxy-port.out.JF8UF... +++ [0318 12:50:14] kubectl proxy running on port 43305 +++ [0318 12:50:14] On try 1, kubectl proxy: ok +++ [0318 12:50:14] Stopping proxy on port 43305 /home/prow/go/src/k8s.io/kubernetes/hack/lib/logging.sh: line 166: 24116 Killed kubectl proxy --port=0 --www=. > "${PROXY_PORT_FILE}" 2>&1 +++ [0318 12:50:14] Starting kubectl proxy on random port; output file in proxy-port.out.t0wif; args: /custom +++ [0318 12:50:14] Attempt 0 to read proxy-port.out.t0wif... +++ [0318 12:50:14] kubectl proxy running on port 42477 +++ [0318 12:50:14] On try 1, kubectl proxy --api-prefix=/custom: Moved Permanently. +++ [0318 12:50:14] Stopping proxy on port 42477 +++ exit code: 0 Recording: run_RESTMapper_evaluation_tests Running command: run_RESTMapper_evaluation_tests +++ Running case: test-cmd.run_RESTMapper_evaluation_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_RESTMapper_evaluation_tests +++ [0318 12:50:14] Creating namespace namespace-1679143814-29476 namespace/namespace-1679143814-29476 created Context "test" modified. 
Recording: run_RESTMapper_evaluation_tests
Running command: run_RESTMapper_evaluation_tests
+++ Running case: test-cmd.run_RESTMapper_evaluation_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0318 12:50:14] Creating namespace namespace-1679143814-29476
namespace/namespace-1679143814-29476 created
Context "test" modified.
+++ [0318 12:50:15] Testing RESTMapper
+++ [0318 12:50:15] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
events                            ev           v1                                     true         Event
limitranges                       limits       v1                                     true         LimitRange
namespaces                        ns           v1                                     false        Namespace
nodes                             no           v1                                     false        Node
persistentvolumeclaims            pvc          v1                                     true         PersistentVolumeClaim
persistentvolumes                 pv           v1                                     false        PersistentVolume
pods                              po           v1                                     true         Pod
podtemplates                                   v1                                     true         PodTemplate
replicationcontrollers            rc           v1                                     true         ReplicationController
resourcequotas                    quota        v1                                     true         ResourceQuota
secrets                                        v1                                     true         Secret
serviceaccounts                   sa           v1                                     true         ServiceAccount
services                          svc          v1                                     true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io/v1        false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io/v1        false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io/v1                false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io/v1              false        APIService
controllerrevisions                            apps/v1                                true         ControllerRevision
daemonsets                        ds           apps/v1                                true         DaemonSet
deployments                       deploy       apps/v1                                true         Deployment
replicasets                       rs           apps/v1                                true         ReplicaSet
statefulsets                      sts          apps/v1                                true         StatefulSet
tokenreviews                                   authentication.k8s.io/v1               false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io/v1                true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io/v1                false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io/v1                false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io/v1                false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling/v2                         true         HorizontalPodAutoscaler
cronjobs                          cj           batch/v1                               true         CronJob
jobs                                           batch/v1                               true         Job
certificatesigningrequests        csr          certificates.k8s.io/v1                 false        CertificateSigningRequest
leases                                         coordination.k8s.io/v1                 true         Lease
endpointslices                                 discovery.k8s.io/v1                    true         EndpointSlice
events                            ev           events.k8s.io/v1                       true         Event
flowschemas                                    flowcontrol.apiserver.k8s.io/v1beta3   false        FlowSchema
prioritylevelconfigurations                    flowcontrol.apiserver.k8s.io/v1beta3   false        PriorityLevelConfiguration
ingressclasses                                 networking.k8s.io/v1                   false        IngressClass
ingresses                         ing          networking.k8s.io/v1                   true         Ingress
networkpolicies                   netpol       networking.k8s.io/v1                   true         NetworkPolicy
runtimeclasses                                 node.k8s.io/v1                         false        RuntimeClass
poddisruptionbudgets              pdb          policy/v1                              true         PodDisruptionBudget
clusterrolebindings                            rbac.authorization.k8s.io/v1           false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io/v1           false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io/v1           true         RoleBinding
roles                                          rbac.authorization.k8s.io/v1           true         Role
priorityclasses                   pc           scheduling.k8s.io/v1                   false        PriorityClass
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
csistoragecapacities                           storage.k8s.io/v1                      true         CSIStorageCapacity
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment
configmap/kube-root-ca.crt
serviceaccount/default
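The table above is kubectl api-resources output, the same discovery data the RESTMapper consults when it resolves names like "po" or "deploy". It can be queried directly; these are standard kubectl flags:

kubectl api-resources                                         # everything, with short names
kubectl api-resources --api-group=rbac.authorization.k8s.io   # one API group
kubectl api-resources --namespaced=false -o name              # cluster-scoped only
kubectl get unknownresourcetype || true                       # fails in the RESTMapper, as asserted above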
Recording: run_clusterroles_tests
Running command: run_clusterroles_tests
+++ Running case: test-cmd.run_clusterroles_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_clusterroles_tests
+++ [0318 12:50:17] Creating namespace namespace-1679143817-32530
namespace/namespace-1679143817-32530 created
Context "test" modified.
+++ [0318 12:50:17] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
has:Warning: deleting cluster-scoped resources
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
has:clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:48: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:49: Successful get clusterrole/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:50: Successful get clusterrole/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
clusterrole.rbac.authorization.k8s.io/resource-reader created
rbac.sh:52: Successful get clusterrole/resource-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:get:list:
rbac.sh:53: Successful get clusterrole/resource-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:deployments:
rbac.sh:54: Successful get clusterrole/resource-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :apps:
clusterrole.rbac.authorization.k8s.io/resourcename-reader created
rbac.sh:56: Successful get clusterrole/resourcename-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:
rbac.sh:57: Successful get clusterrole/resourcename-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:58: Successful get clusterrole/resourcename-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
rbac.sh:59: Successful get clusterrole/resourcename-reader {{range.rules}}{{range.resourceNames}}{{.}}:{{end}}{{end}}: foo:
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated
rbac.sh:82: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:
clusterrolebinding.rbac.authorization.k8s.io/multi-users created
rbac.sh:84: Successful get clusterrolebinding/multi-users {{range.subjects}}{{.name}}:{{end}}: user-1:user-2:
clusterrolebinding.rbac.authorization.k8s.io/super-group created
rbac.sh:87: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:
clusterrolebinding.rbac.authorization.k8s.io/super-group subjects updated
rbac.sh:89: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:
clusterrolebinding.rbac.authorization.k8s.io/multi-groups created
rbac.sh:91: Successful get clusterrolebinding/multi-groups {{range.subjects}}{{.name}}:{{end}}: group-1:group-2:
clusterrolebinding.rbac.authorization.k8s.io/super-sa created
rbac.sh:94: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.namespace}}:{{end}}: otherns:
rbac.sh:95: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:
clusterrolebinding.rbac.authorization.k8s.io/super-sa subjects updated
rbac.sh:97: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.namespace}}:{{end}}: otherns:otherfoo:
rbac.sh:98: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:
clusterrolebinding.rbac.authorization.k8s.io/cluster-admin subjects updated
clusterrolebinding.rbac.authorization.k8s.io/multi-groups subjects updated
clusterrolebinding.rbac.authorization.k8s.io/multi-users subjects updated
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated
clusterrolebinding.rbac.authorization.k8s.io/super-group subjects updated
clusterrolebinding.rbac.authorization.k8s.io/super-sa subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:basic-user subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslicemirroring-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:ephemeral-volume-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:root-ca-cert-publisher subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-after-finished-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:discovery subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:monitoring subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:node subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:service-account-issuer-discovery subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler subjects updated
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
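The cases above drive kubectl create clusterrole / clusterrolebinding and then mutate bindings with kubectl set subject; the long wall of "subjects updated" lines is one --all invocation touching every binding. A minimal sketch of the pattern, with names mirroring the fixtures:

kubectl create clusterrole pod-admin --verb='*' --resource=pods --dry-run=client
kubectl create clusterrole pod-admin --verb='*' --resource=pods --dry-run=server
kubectl create clusterrole pod-admin --verb='*' --resource=pods

kubectl create clusterrolebinding super-admin --clusterrole=cluster-admin --user=super-admin
kubectl set subject clusterrolebinding super-admin --user=foo         # append one subject
kubectl set subject clusterrolebinding --all --user=test-all-user     # update every binding at once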
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
rolebinding.rbac.authorization.k8s.io/localrole created
rbac.sh:119: Successful get rolebinding/localrole {{.roleRef.kind}}: Role
rbac.sh:120: Successful get rolebinding/localrole {{range.subjects}}{{.name}}:{{end}}: the-group:
rolebinding.rbac.authorization.k8s.io/localrole subjects updated
rbac.sh:122: Successful get rolebinding/localrole {{range.subjects}}{{.name}}:{{end}}: the-group:foo:
rolebinding.rbac.authorization.k8s.io/sarole created
rbac.sh:125: Successful get rolebinding/sarole {{range.subjects}}{{.namespace}}:{{end}}: otherns:
rbac.sh:126: Successful get rolebinding/sarole {{range.subjects}}{{.name}}:{{end}}: sa-name:
rolebinding.rbac.authorization.k8s.io/sarole subjects updated
rbac.sh:128: Successful get rolebinding/sarole {{range.subjects}}{{.namespace}}:{{end}}: otherns:otherfoo:
rbac.sh:129: Successful get rolebinding/sarole {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rolebinding.rbac.authorization.k8s.io/localrole subjects updated
rolebinding.rbac.authorization.k8s.io/sarole subjects updated
rbac.sh:133: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:test-all-user:
rbac.sh:134: Successful get rolebinding/localrole {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:135: Successful get rolebinding/sarole {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
query for clusterrolebindings had limit param
query for clusterrolebindings had user-specified limit param
Successful describe clusterrolebindings verbose logs:
I0318 12:50:23.146650 25754 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:50:23.151631 25754 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0318 12:50:23.159599 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500 200 OK in 4 milliseconds
I0318 12:50:23.172825 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin 200 OK in 1 milliseconds
I0318 12:50:23.174722 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/multi-groups 200 OK in 1 milliseconds
I0318 12:50:23.176684 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/multi-users 200 OK in 1 milliseconds
I0318 12:50:23.179077 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/super-admin 200 OK in 1 milliseconds
I0318 12:50:23.180879 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/super-group 200 OK in 1 milliseconds
I0318 12:50:23.182787 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/super-sa 200 OK in 1 milliseconds
I0318 12:50:23.184602 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user 200 OK in 1 milliseconds
I0318 12:50:23.186331 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller 200 OK in 1 milliseconds
I0318 12:50:23.188188 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller 200 OK in 1 milliseconds
I0318 12:50:23.189901 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller 200 OK in 1 milliseconds
I0318 12:50:23.191698 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller 200 OK in 1 milliseconds
I0318 12:50:23.193559 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller 200 OK in 1 milliseconds
I0318 12:50:23.195203 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller 200 OK in 1 milliseconds
I0318 12:50:23.196798 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller 200 OK in 1 milliseconds
I0318 12:50:23.198416 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller 200 OK in 1 milliseconds
I0318 12:50:23.200068 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslice-controller 200 OK in 1 milliseconds
I0318 12:50:23.201776 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslicemirroring-controller 200 OK in 1 milliseconds
I0318 12:50:23.203337 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ephemeral-volume-controller 200 OK in 1 milliseconds
I0318 12:50:23.204888 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller 200 OK in 1 milliseconds
I0318 12:50:23.206472 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector 200 OK in 1 milliseconds
I0318 12:50:23.208178 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler 200 OK in 1 milliseconds
I0318 12:50:23.209801 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller 200 OK in 1 milliseconds
I0318 12:50:23.211449 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller 200 OK in 1 milliseconds
I0318 12:50:23.213062 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller 200 OK in 1 milliseconds
I0318 12:50:23.214714 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder 200 OK in 1 milliseconds
I0318 12:50:23.218008 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector 200 OK in 2 milliseconds
I0318 12:50:23.219704 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller 200 OK in 1 milliseconds
I0318 12:50:23.221351 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller 200 OK in 1 milliseconds
I0318 12:50:23.222974 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller 200 OK in 1 milliseconds
I0318 12:50:23.224638 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller 200 OK in 1 milliseconds
I0318 12:50:23.226172 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller 200 OK in 1 milliseconds
I0318 12:50:23.228793 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:root-ca-cert-publisher 200 OK in 1 milliseconds
I0318 12:50:23.230375 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller 200 OK in 1 milliseconds
I0318 12:50:23.232026 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller 200 OK in 1 milliseconds
I0318 12:50:23.233686 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller 200 OK in 1 milliseconds
I0318 12:50:23.235441 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller 200 OK in 1 milliseconds
I0318 12:50:23.237060 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-after-finished-controller 200 OK in 1 milliseconds
I0318 12:50:23.238727 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller 200 OK in 1 milliseconds
I0318 12:50:23.240345 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery 200 OK in 1 milliseconds
I0318 12:50:23.242087 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager 200 OK in 1 milliseconds
I0318 12:50:23.243680 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns 200 OK in 1 milliseconds
I0318 12:50:23.245228 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler 200 OK in 1 milliseconds
I0318 12:50:23.246811 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:monitoring 200 OK in 1 milliseconds
I0318 12:50:23.248309 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node 200 OK in 1 milliseconds
I0318 12:50:23.249995 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier 200 OK in 1 milliseconds
I0318 12:50:23.251470 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer 200 OK in 1 milliseconds
I0318 12:50:23.253024 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:service-account-issuer-discovery 200 OK in 1 milliseconds
I0318 12:50:23.254557 25754 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler 200 OK in 1 milliseconds
query for clusterroles had limit param
query for clusterroles had user-specified limit param
Successful describe clusterroles verbose logs:
I0318 12:50:23.472500 25778 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:50:23.478276 25778 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds
I0318 12:50:23.490014 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?limit=500 200 OK in 7 milliseconds
I0318 12:50:23.508105 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/admin 200 OK in 1 milliseconds
I0318 12:50:23.513956 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/aggregation-reader 200 OK in 1 milliseconds
I0318 12:50:23.515842 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin 200 OK in 1 milliseconds
I0318 12:50:23.518383 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/edit 200 OK in 1 milliseconds
I0318 12:50:23.523687 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/pod-admin 200 OK in 1 milliseconds
I0318 12:50:23.525361 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/resource-reader 200 OK in 1 milliseconds
I0318 12:50:23.526963 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/resourcename-reader 200 OK in 1 milliseconds
I0318 12:50:23.528609 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin 200 OK in 1 milliseconds
I0318 12:50:23.530348 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit 200 OK in 1 milliseconds
I0318 12:50:23.533544 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view 200 OK in 1 milliseconds
I0318 12:50:23.536887 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator 200 OK in 1 milliseconds
I0318 12:50:23.538360 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user 200 OK in 1 milliseconds
I0318 12:50:23.540383 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient 200 OK in 1 milliseconds
I0318 12:50:23.542240 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient 200 OK in 1 milliseconds
I0318 12:50:23.543997 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-approver 200 OK in 1 milliseconds
I0318 12:50:23.545732 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver 200 OK in 1 milliseconds
I0318 12:50:23.547408 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kubelet-serving-approver 200 OK in 1 milliseconds
I0318 12:50:23.549057 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:legacy-unknown-approver 200 OK in 1 milliseconds
I0318 12:50:23.550663 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller 200 OK in 1 milliseconds
I0318 12:50:23.552414 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller 200 OK in 1 milliseconds
I0318 12:50:23.554214 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller 200 OK in 1 milliseconds
I0318 12:50:23.555743 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller 200 OK in 1 milliseconds
I0318 12:50:23.557734 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller 200 OK in 1 milliseconds
I0318 12:50:23.559621 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller 200 OK in 1 milliseconds
I0318 12:50:23.561391 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller 200 OK in 1 milliseconds
I0318 12:50:23.563268 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller 200 OK in 1 milliseconds
I0318 12:50:23.564842 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslice-controller 200 OK in 1 milliseconds
I0318 12:50:23.566642 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslicemirroring-controller 200 OK in 1 milliseconds
I0318 12:50:23.568333 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ephemeral-volume-controller 200 OK in 1 milliseconds
I0318 12:50:23.570063 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller 200 OK in 1 milliseconds
I0318 12:50:23.571851 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector 200 OK in 1 milliseconds
I0318 12:50:23.573485 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler 200 OK in 1 milliseconds
I0318 12:50:23.575133 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller 200 OK in 1 milliseconds
I0318 12:50:23.576746 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller 200 OK in 1 milliseconds
I0318 12:50:23.578658 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller 200 OK in 1 milliseconds
I0318 12:50:23.580624 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder 200 OK in 1 milliseconds
I0318 12:50:23.582876 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector 200 OK in 1 milliseconds
I0318 12:50:23.585634 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller 200 OK in 1 milliseconds
I0318 12:50:23.587144 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller 200 OK in 1 milliseconds
I0318 12:50:23.588764 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller 200 OK in 1 milliseconds
I0318 12:50:23.590492 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller 200 OK in 1 milliseconds
I0318 12:50:23.592278 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller 200 OK in 1 milliseconds
I0318 12:50:23.593808 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:root-ca-cert-publisher 200 OK in 1 milliseconds
I0318 12:50:23.595317 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller 200 OK in 1 milliseconds
I0318 12:50:23.597089 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller 200 OK in 1 milliseconds
I0318 12:50:23.598870 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller 200 OK in 1 milliseconds
I0318 12:50:23.600585 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller 200 OK in 1 milliseconds
I0318 12:50:23.602504 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-after-finished-controller 200 OK in 1 milliseconds
I0318 12:50:23.604063 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller 200 OK in 1 milliseconds
I0318 12:50:23.605404 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery 200 OK in 0 milliseconds
I0318 12:50:23.607087 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster 200 OK in 1 milliseconds
I0318 12:50:23.608727 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator 200 OK in 1 milliseconds
I0318 12:50:23.610264 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager 200 OK in 1 milliseconds
I0318 12:50:23.612090 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns 200 OK in 1 milliseconds
I0318 12:50:23.613684 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler 200 OK in 1 milliseconds
I0318 12:50:23.615823 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin 200 OK in 1 milliseconds
I0318 12:50:23.617244 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:monitoring 200 OK in 0 milliseconds
I0318 12:50:23.618895 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node 200 OK in 1 milliseconds
I0318 12:50:23.621200 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper 200 OK in 1 milliseconds
I0318 12:50:23.622769 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector 200 OK in 1 milliseconds
I0318 12:50:23.624898 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier 200 OK in 1 milliseconds
I0318 12:50:23.626621 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner 200 OK in 1 milliseconds
I0318 12:50:23.628320 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer 200 OK in 1 milliseconds
I0318 12:50:23.629953 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:service-account-issuer-discovery 200 OK in 1 milliseconds
I0318 12:50:23.631825 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler 200 OK in 1 milliseconds
I0318 12:50:23.633620 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/url-reader 200 OK in 1 milliseconds
I0318 12:50:23.635304 25778 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/view 200 OK in 1 milliseconds
+++ exit code: 0
Recording: run_role_tests
Running command: run_role_tests
+++ Running case: test-cmd.run_role_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_role_tests
+++ [0318 12:50:23] Creating namespace namespace-1679143823-31568
namespace/namespace-1679143823-31568 created
Context "test" modified.
+++ [0318 12:50:24] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:159: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:160: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:161: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
message:the server doesn't have a resource type "invalid-resource"
has:the server doesn't have a resource type "invalid-resource"
role.rbac.authorization.k8s.io/group-reader created
rbac.sh:166: Successful get role/group-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:
rbac.sh:167: Successful get role/group-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: deployments:
rbac.sh:168: Successful get role/group-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: apps:
Successful
message:the server doesn't have a resource type "deployments" in group "invalid-group"
has:the server doesn't have a resource type "deployments" in group "invalid-group"
role.rbac.authorization.k8s.io/subresource-reader created
rbac.sh:173: Successful get role/subresource-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:
rbac.sh:174: Successful get role/subresource-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods/status:
rbac.sh:175: Successful get role/subresource-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
role.rbac.authorization.k8s.io/group-subresource-reader created
rbac.sh:178: Successful get role/group-subresource-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:
rbac.sh:179: Successful get role/group-subresource-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: replicasets/scale:
rbac.sh:180: Successful get role/group-subresource-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: apps:
Successful
message:the server doesn't have a resource type "rs" in group "invalid-group"
has:the server doesn't have a resource type "rs" in group "invalid-group"
role.rbac.authorization.k8s.io/resourcename-reader created
rbac.sh:185: Successful get role/resourcename-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:
rbac.sh:186: Successful get role/resourcename-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:187: Successful get role/resourcename-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
rbac.sh:188: Successful get role/resourcename-reader {{range.rules}}{{range.resourceNames}}{{.}}:{{end}}{{end}}: foo:
role.rbac.authorization.k8s.io/resource-reader created
rbac.sh:191: Successful get role/resource-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:get:list:
rbac.sh:192: Successful get role/resource-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods/status:deployments:
rbac.sh:193: Successful get role/resource-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :apps:
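kubectl create role builds the rule shapes checked above straight from flags, including subresources and per-object resource names. A minimal sketch; names mirror the fixtures:

kubectl create role subresource-reader --verb=get,list --resource=pods/status
kubectl create role group-subresource-reader --verb=get,list --resource=replicasets/scale
kubectl create role resourcename-reader --verb=get,list --resource=pods --resource-name=foo
kubectl create role bad-reader --verb=get --resource=invalid-resource || true   # rejected by discovery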
query for roles had limit param
query for roles had user-specified limit param
Successful describe roles verbose logs:
I0318 12:50:26.199089 26325 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:50:26.205298 26325 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds
I0318 12:50:26.211466 26325 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1679143823-31568/roles?limit=500 200 OK in 1 milliseconds
I0318 12:50:26.214403 26325 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1679143823-31568/roles/group-reader 200 OK in 1 milliseconds
I0318 12:50:26.216493 26325 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1679143823-31568/roles/group-subresource-reader 200 OK in 1 milliseconds
I0318 12:50:26.218341 26325 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1679143823-31568/roles/pod-admin 200 OK in 1 milliseconds
I0318 12:50:26.219915 26325 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1679143823-31568/roles/resource-reader 200 OK in 1 milliseconds
I0318 12:50:26.221458 26325 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1679143823-31568/roles/resourcename-reader 200 OK in 1 milliseconds
I0318 12:50:26.223023 26325 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1679143823-31568/roles/subresource-reader 200 OK in 1 milliseconds
query for rolebindings had limit param
query for rolebindings had user-specified limit param
Successful describe rolebindings verbose logs:
I0318 12:50:26.342707 26349 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:50:26.347841 26349 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0318 12:50:26.353411 26349 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1679143823-31568/rolebindings?limit=500 200 OK in 1 milliseconds
No resources found in namespace-1679143823-31568 namespace.
+++ exit code: 0
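The I0318 ... round_trippers.go:553 lines above are kubectl's built-in request tracing; each entry records the HTTP verb, URL, status, and latency of one API call. Any kubectl invocation emits the same trace at a high enough verbosity:

kubectl describe clusterrolebindings -v=6   # one line per request, as in the logs above
kubectl get pods -v=9                       # additionally dumps request and response bodies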
Recording: run_assert_short_name_tests
Running command: run_assert_short_name_tests
+++ Running case: test-cmd.run_assert_short_name_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_assert_short_name_tests
+++ [0318 12:50:26] Creating namespace namespace-1679143826-1679
namespace/namespace-1679143826-1679 created
Context "test" modified.
+++ [0318 12:50:26] Testing assert short name
+++ [0318 12:50:26] Testing propagation of short names for resources
Successful
message:{"kind":"APIResourceList","groupVersion":"v1","resources":[{"name":"bindings","singularName":"binding","namespaced":true,"kind":"Binding","verbs":["create"]},{"name":"componentstatuses","singularName":"componentstatus","namespaced":false,"kind":"ComponentStatus","verbs":["get","list"],"shortNames":["cs"]},{"name":"configmaps","singularName":"configmap","namespaced":true,"kind":"ConfigMap","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["cm"],"storageVersionHash":"qFsyl6wFWjQ="},{"name":"endpoints","singularName":"endpoints","namespaced":true,"kind":"Endpoints","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["ep"],"storageVersionHash":"fWeeMqaN/OA="},{"name":"events","singularName":"event","namespaced":true,"kind":"Event","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["ev"],"storageVersionHash":"r2yiGXH7wu8="},{"name":"limitranges","singularName":"limitrange","namespaced":true,"kind":"LimitRange","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["limits"],"storageVersionHash":"EBKMFVe6cwo="},{"name":"namespaces","singularName":"namespace","namespaced":false,"kind":"Namespace","verbs":["create","delete","get","list","patch","update","watch"],"shortNames":["ns"],"storageVersionHash":"Q3oi5N2YM8M="},{"name":"namespaces/finalize","singularName":"","namespaced":false,"kind":"Namespace","verbs":["update"]},{"name":"namespaces/status","singularName":"","namespaced":false,"kind":"Namespace","verbs":["get","patch","update"]},{"name":"nodes","singularName":"node","namespaced":false,"kind":"Node","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["no"],"storageVersionHash":"XwShjMxG9Fs="},{"name":"nodes/proxy","singularName":"","namespaced":false,"kind":"NodeProxyOptions","verbs":["create","delete","get","patch","update"]},{"name":"nodes/status","singularName":"","namespaced":false,"kind":"Node","verbs":["get","patch","update"]},{"name":"persistentvolumeclaims","singularName":"persistentvolumeclaim","namespaced":true,"kind":"PersistentVolumeClaim","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["pvc"],"storageVersionHash":"QWTyNDq0dC4="},{"name":"persistentvolumeclaims/status","singularName":"","namespaced":true,"kind":"PersistentVolumeClaim","verbs":["get","patch","update"]},{"name":"persistentvolumes","singularName":"persistentvolume","namespaced":false,"kind":"PersistentVolume","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["pv"],"storageVersionHash":"HN/zwEC+JgM="},{"name":"persistentvolumes/status","singularName":"","namespaced":false,"kind":"PersistentVolume","verbs":["get","patch","update"]},{"name":"pods","singularName":"pod","namespaced":true,"kind":"Pod","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["po"],"categories":["all"],"storageVersionHash":"xPOwRZ+Yhw8="},{"name":"pods/attach","singularName":"","namespaced":true,"kind":"PodAttachOptions","verbs":["create","get"]},{"name":"pods/binding","singularName":"","namespaced":true,"kind":"Binding","verbs":["create"]},{"name":"pods/ephemeralcontainers","singularName":"","namespaced":true,"kind":"Pod","verbs":["get","patch","update"]},{"name":"pods/eviction","singularName":"","namespaced":true,"group":"policy","version":"v1","kind":"Eviction","verbs":["create"]},{"name":"pods/exec","singularName":"","namespaced":true,"kind":"PodExecOptions","verbs":["create","get"]},{"name":"pods/log","singularName":"","namespaced":true,"kind":"Pod","verbs":["get"]},{"name":"pods/portforward","singularName":"","namespaced":true,"kind":"PodPortForwardOptions","verbs":["create","get"]},{"name":"pods/proxy","singularName":"","namespaced":true,"kind":"PodProxyOptions","verbs":["create","delete","get","patch","update"]},{"name":"pods/status","singularName":"","namespaced":true,"kind":"Pod","verbs":["get","patch","update"]},{"name":"podtemplates","singularName":"podtemplate","namespaced":true,"kind":"PodTemplate","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"storageVersionHash":"LIXB2x4IFpk="},{"name":"replicationcontrollers","singularName":"replicationcontroller","namespaced":true,"kind":"ReplicationController","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["rc"],"categories":["all"],"storageVersionHash":"Jond2If31h0="},{"name":"replicationcontrollers/scale","singularName":"","namespaced":true,"group":"autoscaling","version":"v1","kind":"Scale","verbs":["get","patch","update"]},{"name":"replicationcontrollers/status","singularName":"","namespaced":true,"kind":"ReplicationController","verbs":["get","patch","update"]},{"name":"resourcequotas","singularName":"resourcequota","namespaced":true,"kind":"ResourceQuota","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["quota"],"storageVersionHash":"8uhSgffRX6w="},{"name":"resourcequotas/status","singularName":"","namespaced":true,"kind":"ResourceQuota","verbs":["get","patch","update"]},{"name":"secrets","singularName":"secret","namespaced":true,"kind":"Secret","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"storageVersionHash":"S6u1pOWzb84="},{"name":"serviceaccounts","singularName":"serviceaccount","namespaced":true,"kind":"ServiceAccount","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["sa"],"storageVersionHash":"pbx9ZvyFpBE="},{"name":"serviceaccounts/token","singularName":"","namespaced":true,"group":"authentication.k8s.io","version":"v1","kind":"TokenRequest","verbs":["create"]},{"name":"services","singularName":"service","namespaced":true,"kind":"Service","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["svc"],"categories":["all"],"storageVersionHash":"0/CO1lhkEBI="},{"name":"services/proxy","singularName":"","namespaced":true,"kind":"ServiceProxyOptions","verbs":["create","delete","get","patch","update"]},{"name":"services/status","singularName":"","namespaced":true,"kind":"Service","verbs":["get","patch","update"]}]}
has:{"name":"configmaps","singularName":"configmap","namespaced":true,"kind":"ConfigMap","verbs":\["create","delete","deletecollection","get","list","patch","update","watch"\],"shortNames":\["cm"\],"storageVersionHash":
No resources found in namespace-1679143826-1679 namespace.
Successful
message:
has not:test-crd-example
customresourcedefinition.apiextensions.k8s.io/examples.test.com created
I0318 12:50:26.973376 19996 handler.go:165] Adding GroupVersion test.com v1 to ResourceManager
discovery.sh:94: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name "examples.test.com"}}{{.metadata.name}}:{{end}}{{end}}: examples.test.com:
I0318 12:50:29.221904 19996 controller.go:624] quota admission added evaluator for: examples.test.com
example.test.com/test-crd-example created
discovery.sh:106: Successful get examples {{range.items}}{{.metadata.name}}:{{end}}: test-crd-example:
Successful
message:NAME               AGE
test-crd-example   0s
has:test-crd-example
No resources found in namespace-1679143826-1679 namespace.
Successful
message:
has not:test-crd-example
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
events                            ev           v1                                     true         Event
limitranges                       limits       v1                                     true         LimitRange
namespaces                        ns           v1                                     false        Namespace
nodes                             no           v1                                     false        Node
persistentvolumeclaims            pvc          v1                                     true         PersistentVolumeClaim
persistentvolumes                 pv           v1                                     false        PersistentVolume
pods                              po           v1                                     true         Pod
podtemplates                                   v1                                     true         PodTemplate
replicationcontrollers            rc           v1                                     true         ReplicationController
resourcequotas                    quota        v1                                     true         ResourceQuota
secrets                                        v1                                     true         Secret
serviceaccounts                   sa           v1                                     true         ServiceAccount
services                          svc          v1                                     true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io/v1        false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io/v1        false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io/v1                false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io/v1              false        APIService
controllerrevisions                            apps/v1                                true         ControllerRevision
daemonsets                        ds           apps/v1                                true         DaemonSet
deployments                       deploy       apps/v1                                true         Deployment
replicasets                       rs           apps/v1                                true         ReplicaSet
statefulsets                      sts          apps/v1                                true         StatefulSet
tokenreviews                                   authentication.k8s.io/v1               false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io/v1                true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io/v1                false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io/v1                false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io/v1                false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling/v2                         true         HorizontalPodAutoscaler
cronjobs                          cj           batch/v1                               true         CronJob
jobs                                           batch/v1                               true         Job
certificatesigningrequests        csr          certificates.k8s.io/v1                 false        CertificateSigningRequest
leases                                         coordination.k8s.io/v1                 true         Lease
endpointslices                                 discovery.k8s.io/v1                    true         EndpointSlice
events                            ev           events.k8s.io/v1                       true         Event
flowschemas                                    flowcontrol.apiserver.k8s.io/v1beta3   false        FlowSchema
prioritylevelconfigurations                    flowcontrol.apiserver.k8s.io/v1beta3   false        PriorityLevelConfiguration
ingressclasses                                 networking.k8s.io/v1                   false        IngressClass
ingresses                         ing          networking.k8s.io/v1                   true         Ingress
networkpolicies                   netpol       networking.k8s.io/v1                   true         NetworkPolicy
runtimeclasses                                 node.k8s.io/v1                         false        RuntimeClass
poddisruptionbudgets              pdb          policy/v1                              true         PodDisruptionBudget
clusterrolebindings                            rbac.authorization.k8s.io/v1           false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io/v1           false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io/v1           true         RoleBinding
roles                                          rbac.authorization.k8s.io/v1           true         Role
priorityclasses                   pc           scheduling.k8s.io/v1                   false        PriorityClass
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
csistoragecapacities                           storage.k8s.io/v1                      true         CSIStorageCapacity
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment
examples                          pod          test.com/v1                            true         Example
Successful
message:NAME               AGE
test-crd-example   0s
has:test-crd-example
No resources found in namespace-1679143826-1679 namespace.
Successful
message:
has not:test-crd-example
example.test.com "test-crd-example" deleted
I0318 12:50:29.777159 19996 handler.go:165] Adding GroupVersion test.com v1 to ResourceManager
customresourcedefinition.apiextensions.k8s.io "examples.test.com" deleted
I0318 12:50:29.787873 19996 handler.go:165] Adding GroupVersion test.com v1 to ResourceManager
+++ exit code: 0
Recording: run_assert_singular_name_tests
Running command: run_assert_singular_name_tests
+++ Running case: test-cmd.run_assert_singular_name_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_assert_singular_name_tests
+++ [0318 12:50:29] Creating namespace namespace-1679143829-13516
namespace/namespace-1679143829-13516 created
Context "test" modified.
+++ [0318 12:50:30] Testing assert singular name
No resources found in namespace-1679143829-13516 namespace.
Successful
message:
has not:test-crd-example
customresourcedefinition.apiextensions.k8s.io/examples.test.com created
I0318 12:50:30.745902 19996 handler.go:165] Adding GroupVersion test.com v1 to ResourceManager
discovery.sh:172: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name "examples.test.com"}}{{.metadata.name}}:{{end}}{{end}}: examples.test.com:
example.test.com/test-crd-example created
discovery.sh:184: Successful get examples {{range.items}}{{.metadata.name}}:{{end}}: test-crd-example:
Successful
message:NAME               AGE
test-crd-example   0s
has:test-crd-example
No resources found in namespace-1679143829-13516 namespace.
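The examples.test.com rows in the discovery table come from a CRD the test registers on the fly. A minimal sketch of such a manifest applied via heredoc (the schema and the exact names block are illustrative; the actual test fixture may differ):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.test.com
spec:
  group: test.com
  scope: Namespaced
  names:
    plural: examples
    singular: example    # drives the singular-name assertions above
    kind: Example
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF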
Successful
message:
has not:test-crd-example
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
events                            ev           v1                                     true         Event
limitranges                       limits       v1                                     true         LimitRange
namespaces                        ns           v1                                     false        Namespace
nodes                             no           v1                                     false        Node
persistentvolumeclaims            pvc          v1                                     true         PersistentVolumeClaim
persistentvolumes                 pv           v1                                     false        PersistentVolume
pods                              po           v1                                     true         Pod
podtemplates                                   v1                                     true         PodTemplate
replicationcontrollers            rc           v1                                     true         ReplicationController
resourcequotas                    quota        v1                                     true         ResourceQuota
secrets                                        v1                                     true         Secret
serviceaccounts                   sa           v1                                     true         ServiceAccount
services                          svc          v1                                     true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io/v1        false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io/v1        false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io/v1                false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io/v1              false        APIService
controllerrevisions                            apps/v1                                true         ControllerRevision
daemonsets                        ds           apps/v1                                true         DaemonSet
deployments                       deploy       apps/v1                                true         Deployment
replicasets                       rs           apps/v1                                true         ReplicaSet
statefulsets                      sts          apps/v1                                true         StatefulSet
tokenreviews                                   authentication.k8s.io/v1               false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io/v1                true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io/v1                false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io/v1                false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io/v1                false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling/v2                         true         HorizontalPodAutoscaler
cronjobs                          cj           batch/v1                               true         CronJob
jobs                                           batch/v1                               true         Job
certificatesigningrequests        csr          certificates.k8s.io/v1                 false        CertificateSigningRequest
leases                                         coordination.k8s.io/v1                 true         Lease
endpointslices                                 discovery.k8s.io/v1                    true         EndpointSlice
events                            ev           events.k8s.io/v1                       true         Event
flowschemas                                    flowcontrol.apiserver.k8s.io/v1beta3   false        FlowSchema
prioritylevelconfigurations                    flowcontrol.apiserver.k8s.io/v1beta3   false        PriorityLevelConfiguration
ingressclasses                                 networking.k8s.io/v1                   false        IngressClass
ingresses                         ing          networking.k8s.io/v1                   true         Ingress
networkpolicies                   netpol       networking.k8s.io/v1                   true         NetworkPolicy
runtimeclasses                                 node.k8s.io/v1                         false        RuntimeClass
poddisruptionbudgets              pdb          policy/v1                              true         PodDisruptionBudget
clusterrolebindings                            rbac.authorization.k8s.io/v1           false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io/v1           false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io/v1           true         RoleBinding
roles                                          rbac.authorization.k8s.io/v1           true         Role
priorityclasses                   pc           scheduling.k8s.io/v1                   false        PriorityClass
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
csistoragecapacities                           storage.k8s.io/v1                      true         CSIStorageCapacity
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment
examples                                       test.com/v1                            true         Example
Successful
message:NAME               AGE
test-crd-example   0s
has:test-crd-example
No resources found in namespace-1679143829-13516 namespace.
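The table above is unfiltered discovery output from kubectl api-resources; narrower views use standard kubectl flags:

kubectl api-resources --namespaced=true            # only namespaced resources
kubectl api-resources --api-group=storage.k8s.io   # a single API group
kubectl api-resources -o name                      # plural.group names only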
Successful
message:
has not:test-crd-example
example.test.com "test-crd-example" deleted
I0318 12:50:33.904176 19996 handler.go:165] Adding GroupVersion test.com v1 to ResourceManager
customresourcedefinition.apiextensions.k8s.io "examples.test.com" deleted
I0318 12:50:33.915870 19996 handler.go:165] Adding GroupVersion test.com v1 to ResourceManager
+++ exit code: 0
Recording: run_assert_categories_tests
Running command: run_assert_categories_tests
+++ Running case: test-cmd.run_assert_categories_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_assert_categories_tests
+++ [0318 12:50:34] Testing propagation of categories for resources
Successful
message:"name":"pods","singularName":"pod","namespaced":true,"kind":"Pod","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["po"],"categories":["all"],"storageVersionHash":"xPOwRZ+Yhw8="}
has:"categories":\["all"\]
+++ exit code: 0
Recording: run_pod_tests
Running command: run_pod_tests
+++ Running case: test-cmd.run_pod_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_pod_tests
+++ [0318 12:50:34] Testing kubectl(v1:pods)
+++ [0318 12:50:34] Creating namespace namespace-1679143834-32323
namespace/namespace-1679143834-32323 created
Context "test" modified.
core.sh:76: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {
                "creationTimestamp": "2023-03-18T12:50:34Z",
                "labels": {
                    "name": "valid-pod"
                },
                "name": "valid-pod",
                "namespace": "namespace-1679143834-32323",
                "resourceVersion": "365",
                "uid": "1455c41f-d932-4d83-a715-af762aad16ee"
            },
            "spec": {
                "containers": [
                    {
                        "image": "registry.k8s.io/serve_hostname",
                        "imagePullPolicy": "Always",
                        "name": "kubernetes-serve-hostname",
                        "resources": {
                            "limits": {
                                "cpu": "1",
                                "memory": "512Mi"
                            },
                            "requests": {
                                "cpu": "1",
                                "memory": "512Mi"
                            }
                        },
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File"
                    }
                ],
                "dnsPolicy": "ClusterFirst",
                "enableServiceLinks": true,
                "preemptionPolicy": "PreemptLowerPriority",
                "priority": 0,
                "restartPolicy": "Always",
                "schedulerName": "default-scheduler",
                "securityContext": {},
                "terminationGracePeriodSeconds": 30
            },
            "status": {
                "phase": "Pending",
                "qosClass": "Guaranteed"
            }
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": ""
    }
}
core.sh:81: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:82: Successful get pod valid-pod {{.metadata.name}}: valid-pod
core.sh:83: Successful get pod/valid-pod {{.metadata.name}}: valid-pod
core.sh:84: Successful get pods/valid-pod {{.metadata.name}}: valid-pod
Successful
message:kubectl-create
has:kubectl-create
core.sh:89: Successful get pods {.items[*].metadata.name}: valid-pod
core.sh:90: Successful get pod valid-pod {.metadata.name}: valid-pod
core.sh:91: Successful get pod/valid-pod {.metadata.name}: valid-pod
core.sh:92: Successful get pods/valid-pod {.metadata.name}: valid-pod
matched Name:
matched Image:
matched Node:
matched Labels:
matched Status:
core.sh:94: Successful describe pods valid-pod:
Name:             valid-pod
Namespace:        namespace-1679143834-32323
Priority:         0
Node:
Labels:           name=valid-pod
Annotations:
Status:           Pending
IP:
IPs:
Containers:
  kubernetes-serve-hostname:
    Image:      registry.k8s.io/serve_hostname
    Port:
    Host Port:
    Limits:
      cpu:     1
      memory:  512Mi
    Requests:
      cpu:        1
      memory:     512Mi
    Environment:
    Mounts:
Volumes:
QoS Class:        Guaranteed
Node-Selectors:
Tolerations:
Events:
core.sh:96: Successful describe
Name:             valid-pod
Namespace:        namespace-1679143834-32323
Priority:         0
Node:
Labels:           name=valid-pod
Annotations:
Status:           Pending
IP:
IPs:
Containers:
  kubernetes-serve-hostname:
    Image:      registry.k8s.io/serve_hostname
    Port:
    Host Port:
    Limits:
      cpu:     1
      memory:  512Mi
    Requests:
      cpu:        1
      memory:     512Mi
    Environment:
    Mounts:
Volumes:
QoS Class:        Guaranteed
Node-Selectors:
Tolerations:
Events:
core.sh:98: Successful describe
Name:             valid-pod
Namespace:        namespace-1679143834-32323
Priority:         0
Node:
Labels:           name=valid-pod
Annotations:
Status:           Pending
IP:
IPs:
Containers:
  kubernetes-serve-hostname:
    Image:      registry.k8s.io/serve_hostname
    Port:
    Host Port:
    Limits:
      cpu:     1
      memory:  512Mi
    Requests:
      cpu:        1
      memory:     512Mi
    Environment:
    Mounts:
Volumes:
QoS Class:        Guaranteed
Node-Selectors:
Tolerations:
core.sh:100: Successful describe
Name:             valid-pod
Namespace:        namespace-1679143834-32323
Priority:         0
Node:
Labels:           name=valid-pod
Annotations:
Status:           Pending
IP:
IPs:
Containers:
  kubernetes-serve-hostname:
    Image:      registry.k8s.io/serve_hostname
    Port:
    Host Port:
    Limits:
      cpu:     1
      memory:  512Mi
    Requests:
      cpu:        1
      memory:     512Mi
    Environment:
    Mounts:
Volumes:
QoS Class:        Guaranteed
Node-Selectors:
Tolerations:
Events:
matched Name:
matched Image:
matched Node:
matched Labels:
matched Status:
Successful describe pods:
Name:             valid-pod
Namespace:        namespace-1679143834-32323
Priority:         0
Node:
Labels:           name=valid-pod
Annotations:
Status:           Pending
IP:
IPs:
Containers:
  kubernetes-serve-hostname:
    Image:      registry.k8s.io/serve_hostname
    Port:
    Host Port:
    Limits:
      cpu:     1
      memory:  512Mi
    Requests:
      cpu:        1
      memory:     512Mi
    Environment:
    Mounts:
Volumes:
QoS Class:        Guaranteed
Node-Selectors:
Tolerations:
Events:
Successful describe
Name:             valid-pod
Namespace:        namespace-1679143834-32323
Priority:         0
Node:
Labels:           name=valid-pod
Annotations:
Status:           Pending
IP:
IPs:
Containers:
  kubernetes-serve-hostname:
    Image:      registry.k8s.io/serve_hostname
    Port:
    Host Port:
    Limits:
      cpu:     1
      memory:  512Mi
    Requests:
      cpu:        1
      memory:     512Mi
    Environment:
    Mounts:
Volumes:
QoS Class:        Guaranteed
Node-Selectors:
Tolerations:
Events:
Successful describe
Name:             valid-pod
Namespace:        namespace-1679143834-32323
Priority:         0
Node:
Labels:           name=valid-pod
Annotations:
Status:           Pending
IP:
IPs:
Containers:
  kubernetes-serve-hostname:
    Image:      registry.k8s.io/serve_hostname
    Port:
    Host Port:
    Limits:
      cpu:     1
      memory:  512Mi
    Requests:
      cpu:        1
      memory:     512Mi
    Environment:
    Mounts:
Volumes:
QoS Class:        Guaranteed
Node-Selectors:
Tolerations:
Successful describe
Name:             valid-pod
Namespace:        namespace-1679143834-32323
Priority:         0
Node:
Labels:           name=valid-pod
Annotations:
Status:           Pending
IP:
IPs:
Containers:
  kubernetes-serve-hostname:
    Image:      registry.k8s.io/serve_hostname
    Port:
    Host Port:
    Limits:
      cpu:     1
      memory:  512Mi
    Requests:
      cpu:        1
      memory:     512Mi
    Environment:
    Mounts:
Volumes:
QoS Class:        Guaranteed
Node-Selectors:
Tolerations:
Events:
query for pods had limit param
query for events had limit param
query for pods had user-specified limit param
Successful describe pods verbose logs:
I0318 12:50:36.211665 27287 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:50:36.216333 27287 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0318 12:50:36.221466 27287 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143834-32323/pods?limit=500 200 OK in 1 milliseconds
I0318 12:50:36.224473 27287 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143834-32323/pods/valid-pod 200 OK in 1 milliseconds
I0318 12:50:36.227218 27287 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143834-32323/events?fieldSelector=involvedObject.name%3Dvalid-pod%2CinvolvedObject.namespace%3Dnamespace-1679143834-32323%2CinvolvedObject.uid%3D1455c41f-d932-4d83-a715-af762aad16ee&limit=500 200 OK in 1 milliseconds
core.sh:118: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:122: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
core.sh:127: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
pod "valid-pod" deleted
core.sh:131: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
core.sh:136: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
pod "valid-pod" deleted
core.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
+++ [0318 12:50:37] Creating namespace namespace-1679143837-15778
namespace/namespace-1679143837-15778 created
Context "test" modified.
core.sh:145: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
core.sh:149: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:153: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
+++ [0318 12:50:38] Creating namespace namespace-1679143838-25271
namespace/namespace-1679143838-25271 created
Context "test" modified.
core.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
core.sh:166: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:170: Successful get pods -lname=valid-pod {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:174: Successful get pods -lname=valid-pod {{range.items}}{{.metadata.name}}:{{end}}:
+++ [0318 12:50:39] Creating namespace namespace-1679143839-21860
namespace/namespace-1679143839-21860 created
Context "test" modified.
core.sh:179: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
core.sh:183: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name was specified
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector.
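The two errors just above pin down kubectl delete's argument rules: a name, a selector, or --all is required, and a selector and --all are rejected together. A minimal sketch (label and pod names are taken from the test above):

kubectl delete pods -l name=valid-pod         # delete by label selector
kubectl delete pods --all                     # delete everything in the namespace
kubectl delete pods --all -l name=valid-pod   # rejected: 'all' plus a non-empty selector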
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:210: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:214: Successful get pods -lname=valid-pod {{range.items}}{{.metadata.name}}:{{end}}:
core.sh:219: Successful get namespaces {{range.items}}{{ if eq .metadata.name "test-kubectl-describe-pod" }}found{{end}}{{end}}:: :
namespace/test-kubectl-describe-pod created
core.sh:223: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
core.sh:227: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}:
secret/test-secret created (dry run)
secret/test-secret created (server dry run)
core.sh:231: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}:
secret/test-secret created
core.sh:235: Successful get secret/test-secret --namespace=test-kubectl-describe-pod {{.metadata.name}}: test-secret
core.sh:236: Successful get secret/test-secret --namespace=test-kubectl-describe-pod {{.type}}: test-type
core.sh:241: Successful get configmaps --namespace=test-kubectl-describe-pod {{range.items}}{{ if eq .metadata.name "test-configmap" }}found{{end}}{{end}}:: :
configmap/test-configmap created
core.sh:247: Successful get configmap/test-configmap --namespace=test-kubectl-describe-pod {{.metadata.name}}: test-configmap
core.sh:251: Successful get pdb --namespace=test-kubectl-describe-pod {{range.items}}{{ if eq .metadata.name "test-pdb-1" }}found{{end}}{{end}}:: :
poddisruptionbudget.policy/test-pdb-1 created (dry run)
I0318 12:50:41.901130 19996 controller.go:624] quota admission added evaluator for: poddisruptionbudgets.policy
poddisruptionbudget.policy/test-pdb-1 created (server dry run)
core.sh:255: Successful get pdb --namespace=test-kubectl-describe-pod {{range.items}}{{ if eq .metadata.name "test-pdb-1" }}found{{end}}{{end}}:: :
poddisruptionbudget.policy/test-pdb-1 created
core.sh:259: Successful get pdb/test-pdb-1 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 2
poddisruptionbudget.policy/test-pdb-2 created
core.sh:263: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
query for poddisruptionbudgets had limit param
query for events had limit param
query for poddisruptionbudgets had user-specified limit param
Successful describe poddisruptionbudgets verbose logs:
I0318 12:50:42.313563 28207 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:50:42.318562 28207 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0318 12:50:42.324010 28207 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets?limit=500 200 OK in 1 milliseconds
I0318 12:50:42.327363 28207 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-1 200 OK in 1 milliseconds
I0318 12:50:42.329417 28207 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.kind%3DPodDisruptionBudget%2CinvolvedObject.uid%3D35a3571b-ef7f-4301-b9ca-c3609adc1143%2CinvolvedObject.name%3Dtest-pdb-1%2CinvolvedObject.namespace%3Dtest-kubectl-describe-pod&limit=500 200 OK in 1 milliseconds
I0318 12:50:42.331283 28207 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-2 200 OK in 1 milliseconds
I0318 12:50:42.332818 28207 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.uid%3D5db6c3a7-db19-44b2-b5dd-6b3725a36221%2CinvolvedObject.name%3Dtest-pdb-2%2CinvolvedObject.namespace%3Dtest-kubectl-describe-pod%2CinvolvedObject.kind%3DPodDisruptionBudget&limit=500 200 OK in 1 milliseconds
poddisruptionbudget.policy/test-pdb-3 created
core.sh:271: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:275: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:281: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}:
pod/env-test-pod created
matched TEST_CMD_1
matched
matched TEST_CMD_2
matched
matched TEST_CMD_3
matched env-test-pod (v1:metadata.name)
core.sh:284: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
Name:             env-test-pod
Namespace:        test-kubectl-describe-pod
Priority:         0
Node:
Labels:
Annotations:
Status:           Pending
IP:
IPs:
Containers:
  test-container:
    Image:      registry.k8s.io/busybox
    Port:
    Host Port:
    Command:
      /bin/sh
      -c
      env
    Environment:
      TEST_CMD_1:   Optional: false
      TEST_CMD_2:   Optional: false
      TEST_CMD_3:  env-test-pod (v1:metadata.name)
    Mounts:
Volumes:
QoS Class:        BestEffort
Node-Selectors:
Tolerations:
Events:
matched TEST_CMD_1
matched
matched TEST_CMD_2
matched
matched TEST_CMD_3
matched env-test-pod (v1:metadata.name)
Successful describe pods --namespace=test-kubectl-describe-pod:
Name:             env-test-pod
Namespace:        test-kubectl-describe-pod
Priority:         0
Node:
Labels:
Annotations:
Status:           Pending
IP:
IPs:
Containers:
  test-container:
    Image:      registry.k8s.io/busybox
    Port:
    Host Port:
    Command:
      /bin/sh
      -c
      env
    Environment:
      TEST_CMD_1:   Optional: false
      TEST_CMD_2:   Optional: false
      TEST_CMD_3:  env-test-pod (v1:metadata.name)
    Mounts:
Volumes:
QoS Class:        BestEffort
Node-Selectors:
Tolerations:
Events:
pod "env-test-pod" deleted
secret "test-secret" deleted
configmap "test-configmap" deleted
poddisruptionbudget.policy "test-pdb-1" deleted
poddisruptionbudget.policy "test-pdb-2" deleted
poddisruptionbudget.policy "test-pdb-3" deleted
poddisruptionbudget.policy "test-pdb-4" deleted
namespace "test-kubectl-describe-pod" deleted
core.sh:296: Successful get priorityclasses {{range.items}}{{ if eq .metadata.name "test-priorityclass" }}found{{end}}{{end}}:: :
priorityclass.scheduling.k8s.io/test-priorityclass created (dry run)
priorityclass.scheduling.k8s.io/test-priorityclass created (server dry run)
core.sh:300: Successful get priorityclasses {{range.items}}{{ if eq .metadata.name "test-priorityclass" }}found{{end}}{{end}}:: :
priorityclass.scheduling.k8s.io/test-priorityclass created
core.sh:303: Successful get priorityclasses {{range.items}}{{ if eq .metadata.name "test-priorityclass" }}found{{end}}{{end}}:: found:
query for priorityclasses had limit param
query for events had limit param
query for priorityclasses had user-specified limit param
Successful describe priorityclasses verbose logs:
I0318 12:50:49.301376 28486 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:50:49.308154 28486 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 6 milliseconds
I0318 12:50:49.313729 28486 round_trippers.go:553] GET https://127.0.0.1:6443/apis/scheduling.k8s.io/v1/priorityclasses?limit=500 200 OK in 1 milliseconds
I0318 12:50:49.317106 28486 round_trippers.go:553] GET https://127.0.0.1:6443/apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical 200 OK in 1 milliseconds
I0318 12:50:49.322405 28486 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dsystem-cluster-critical%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DPriorityClass%2CinvolvedObject.uid%3D67e29af4-2ae9-4b7f-9c44-bd127400767b&limit=500 200 OK in 5 milliseconds
I0318 12:50:49.324388 28486 round_trippers.go:553] GET https://127.0.0.1:6443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical 200 OK in 1 milliseconds
I0318 12:50:49.326154 28486 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dsystem-node-critical%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DPriorityClass%2CinvolvedObject.uid%3D1068201b-0339-429f-ac26-7a9fdc51f99e&limit=500 200 OK in 1 milliseconds
I0318 12:50:49.327955 28486 round_trippers.go:553] GET https://127.0.0.1:6443/apis/scheduling.k8s.io/v1/priorityclasses/test-priorityclass 200 OK in 1 milliseconds
I0318 12:50:49.329452 28486 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dtest-priorityclass%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DPriorityClass%2CinvolvedObject.uid%3Df99f0ce8-96bc-4047-9b3e-897b1567eba6&limit=500 200 OK in 1 milliseconds
priorityclass.scheduling.k8s.io "test-priorityclass" deleted
+++ [0318 12:50:49] Creating namespace namespace-1679143849-30637
namespace/namespace-1679143849-30637 created
Context "test" modified.
core.sh:311: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
pod/agnhost-primary created
core.sh:316: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: agnhost-primary:valid-pod:
core.sh:320: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: agnhost-primary:valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
pod "agnhost-primary" force deleted
core.sh:324: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
+++ [0318 12:50:50] Creating namespace namespace-1679143850-3110
namespace/namespace-1679143850-3110 created
Context "test" modified.
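The "had limit param" checks above assert that describe's list calls are chunked (?limit=500 on the wire). The same request log can be reproduced with kubectl's client-side verbosity flag; --chunk-size tunes the page size where list chunking applies:

kubectl describe priorityclasses -v=6   # -v=6 logs each GET, including ?limit=500
kubectl get pods --chunk-size=100 -v=6  # smaller pages for large lists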
core.sh:329: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
core.sh:333: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:337: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:
pod/valid-pod labeled (dry run)
pod/valid-pod labeled (server dry run)
core.sh:342: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:
core.sh:346: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:
pod/valid-pod labeled
core.sh:350: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:new-valid-pod:
core.sh:354: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:new-valid-pod:
pod/valid-pod labeled
core.sh:358: Successful get pod valid-pod {{.metadata.labels.emptylabel}}:
core.sh:362: Successful get pod valid-pod {{.metadata.annotations.emptyannotation}}:
pod/valid-pod annotate (dry run)
pod/valid-pod annotate (server dry run)
core.sh:367: Successful get pod valid-pod {{.metadata.annotations.emptyannotation}}:
core.sh:371: Successful get pod valid-pod {{.metadata.annotations.emptyannotation}}:
pod/valid-pod annotate
core.sh:375: Successful get pod valid-pod {{.metadata.annotations.emptyannotation}}:
Successful
message:kubectl-create kubectl-label kubectl-annotate
has:kubectl-annotate
core.sh:382: Successful get pod valid-pod {{range.items}}{{.metadata.annotations}}:{{end}}:
Flag --record has been deprecated, --record will be removed in the future
pod/valid-pod labeled
core.sh:386: Successful get pod valid-pod {{range.metadata.annotations}}{{.}}:{{end}}: :kubectl label pods valid-pod record-change=true --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true:
Successful
message:kubectl-create kubectl-annotate kubectl-label
has:kubectl-label
Flag --record has been deprecated, --record will be removed in the future
pod/valid-pod labeled
core.sh:395: Successful get pod valid-pod {{range.metadata.annotations}}{{.}}:{{end}}: :kubectl label pods valid-pod record-change=true --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true:
Flag --record has been deprecated, --record will be removed in the future
pod/valid-pod labeled
core.sh:402: Successful get pod valid-pod {{range.metadata.annotations}}{{.}}:{{end}}: :kubectl label pods valid-pod new-record-change=true --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true:
core.sh:407: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted core.sh:411: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bcore.sh:415: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/pod-with-precision created core.sh:419: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: pod-with-precision: (Bpod/pod-with-precision patched core.sh:425: Successful get pod pod-with-precision {{.metadata.annotations.patchkey}}: patchvalue (Bpod/pod-with-precision labeled core.sh:429: Successful get pod pod-with-precision {{.metadata.labels.labelkey}}: labelvalue (Bpod/pod-with-precision annotate core.sh:433: Successful get pod pod-with-precision {{.metadata.annotations.annotatekey}}: annotatevalue (BI0318 12:50:53.859764 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="test-kubectl-describe-pod" pod "pod-with-precision" deleted pod/test-pod created pod/test-pod annotate core.sh:443: Successful get pod test-pod {{.metadata.annotations.annotatekey}}: annotatevalue (BapiVersion: v1 kind: Pod metadata: annotations: annotatekey: localvalue labels: name: test-pod-label name: test-pod spec: containers: - image: registry.k8s.io/pause:3.9 name: kubernetes-pause core.sh:450: Successful get pod test-pod {{.metadata.annotations.annotatekey}}: annotatevalue (BSuccessful (Bmessage:apiVersion: v1 kind: Pod metadata: annotations: annotatekey: localvalue labels: name: test-pod-label name: test-pod spec: containers: - image: registry.k8s.io/pause:3.9 name: kubernetes-pause has:localvalue pod "test-pod" deleted core.sh:458: Successful get service {{range.items}}{{.metadata.name}}:{{end}}: (Bcore.sh:459: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (BI0318 12:50:54.882948 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679143850-3110/modified" clusterIPs=map[IPv4:10.0.0.183] service/modified created replicationcontroller/modified created I0318 12:50:54.952467 23056 event.go:307] "Event occurred" object="namespace-1679143850-3110/modified" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: modified-4bb47" core.sh:467: Successful get service {{range.items}}{{.metadata.name}}:{{end}}: modified: (Bcore.sh:468: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: modified: (BSuccessful (Bmessage:kubectl-create has:kubectl-create Successful (Bmessage:kube-controller-manager kubectl-create has:kubectl-create service "modified" deleted replicationcontroller "modified" deleted core.sh:479: Successful get service {{range.items}}{{.metadata.name}}:{{end}}: (Bcore.sh:480: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (BI0318 12:50:55.755253 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679143850-3110/modified" clusterIPs=map[IPv4:10.0.0.37] service/modified created replicationcontroller/modified created I0318 12:50:55.817436 23056 event.go:307] "Event occurred" object="namespace-1679143850-3110/modified" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: modified-t5m68" core.sh:484: Successful get service {{range.items}}{{.metadata.name}}:{{end}}: modified: (Bcore.sh:485: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: modified: (Bservice "modified" deleted replicationcontroller "modified" deleted core.sh:496: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/valid-pod created core.sh:500: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
valid-pod: (BSuccessful (Bmessage:The request is invalid: patch: Invalid value: "map[metadata:map[labels:invalid]]": cannot restore map from string has:cannot restore map from string Successful (Bmessage:pod/valid-pod patched (no change) has:patched (no change) Flag --record has been deprecated, --record will be removed in the future pod/valid-pod patched core.sh:517: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx: (Bcore.sh:519: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]: (Bpod/valid-pod patched core.sh:523: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx2: (Bpod/valid-pod patched core.sh:527: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx: (BFlag --record has been deprecated, --record will be removed in the future pod/valid-pod patched Flag --record has been deprecated, --record will be removed in the future pod/valid-pod patched core.sh:532: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx: (Bpod/valid-pod patched core.sh:537: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml: (Bpod/valid-pod patched core.sh:542: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:3.9: (BSuccessful (Bmessage:kubectl-create kubectl-patch has:kubectl-patch pod/valid-pod patched core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx: (B+++ [0318 12:50:58] "kubectl patch with resourceVersion 619" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again pod "valid-pod" deleted pod/valid-pod replaced core.sh:586: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname (BSuccessful (Bmessage:kubectl-replace has:kubectl-replace Successful (Bmessage:error: --grace-period must have --force specified has:\-\-grace-period must have \-\-force specified Successful (Bmessage:error: --timeout must have --force specified has:\-\-timeout must have \-\-force specified I0318 12:50:59.281689 23056 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"node-v1-test\" does not exist" node/node-v1-test created core.sh:614: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: : (Bnode/node-v1-test replaced (server dry run) node/node-v1-test replaced (dry run) core.sh:639: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: : (Bnode/node-v1-test replaced core.sh:655: Successful get node node-v1-test {{.metadata.annotations.a}}: b (Bnode "node-v1-test" deleted core.sh:662: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx: (Bcore.sh:665: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: registry.k8s.io/serve_hostname: (BSuccessful (Bmessage:kubectl-replace kubectl-edit has:kubectl-edit Edit cancelled, no 
changes made. Edit cancelled, no changes made. Edit cancelled, no changes made. Edit cancelled, no changes made. core.sh:681: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod (BapiVersion: v1 kind: Pod metadata: labels: name: localonlyvalue name: test-pod spec: containers: - image: registry.k8s.io/pause:3.9 name: kubernetes-pause core.sh:686: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod (BSuccessful (Bmessage:apiVersion: v1 kind: Pod metadata: labels: name: localonlyvalue name: test-pod spec: containers: - image: registry.k8s.io/pause:3.9 name: kubernetes-pause has:localonlyvalue core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod (Berror: 'name' already has a value (valid-pod), and --overwrite is false core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod (Bcore.sh:699: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod (Bpod/valid-pod labeled core.sh:703: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan (Bcore.sh:707: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (BWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "valid-pod" force deleted core.sh:711: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (B+++ [0318 12:51:02] Creating namespace namespace-1679143862-28451 namespace/namespace-1679143862-28451 created Context "test" modified. core.sh:716: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/redis-master created pod/valid-pod created core.sh:720: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod: (Bcore.sh:724: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod: (Bpod "redis-master" deleted pod "valid-pod" deleted core.sh:728: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (B+++ [0318 12:51:03] Creating namespace namespace-1679143863-8287 namespace/namespace-1679143863-8287 created Context "test" modified. core.sh:734: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/test-pod created core.sh:738: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label (Bpod/test-pod replaced core.sh:746: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-replaced (BWarning: resource pods/test-pod is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. pod/test-pod configured core.sh:753: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-applied (Bpod/test-pod replaced core.sh:762: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-replaced (Bpod "test-pod" deleted +++ exit code: 0 Recording: run_save_config_tests Running command: run_save_config_tests +++ Running case: test-cmd.run_save_config_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_save_config_tests +++ [0318 12:51:05] Testing kubectl --save-config +++ [0318 12:51:05] Creating namespace namespace-1679143865-5175 namespace/namespace-1679143865-5175 created Context "test" modified. 
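--save-config, exercised next, stores the object's applied configuration in an annotation so that a later kubectl apply can compute a three-way diff (the warning above shows what happens when it is missing). A minimal sketch (pod.yaml and test-pod are illustrative names):

kubectl create -f pod.yaml --save-config
# The annotation now carries the full last-applied object:
kubectl get pod test-pod \
  -o go-template='{{index .metadata.annotations "kubectl.kubernetes.io/last-applied-configuration"}}'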
save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod created
pod "test-pod" deleted
+++ [0318 12:51:06] Creating namespace namespace-1679143866-13326
namespace/namespace-1679143866-13326 created
Context "test" modified.
save-config.sh:41: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod created
pod/test-pod edited
pod "test-pod" deleted
+++ [0318 12:51:06] Creating namespace namespace-1679143866-14796
namespace/namespace-1679143866-14796 created
Context "test" modified.
save-config.sh:56: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod created
pod/test-pod replaced
pod "test-pod" deleted
save-config.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/nginx created
save-config.sh:74: Successful get svc {{range.items}}{{.metadata.name}}:{{end}}:
I0318 12:51:08.107548 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679143866-14796/nginx" clusterIPs=map[IPv4:10.0.0.193]
service/nginx exposed
pod "nginx" deleted
service "nginx" deleted
save-config.sh:83: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
replicationcontroller/frontend created
I0318 12:51:08.634565 23056 event.go:307] "Event occurred" object="namespace-1679143866-14796/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-xhkwl"
I0318 12:51:08.653885 23056 event.go:307] "Event occurred" object="namespace-1679143866-14796/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-6sg2h"
I0318 12:51:08.654084 23056 event.go:307] "Event occurred" object="namespace-1679143866-14796/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-nwphc"
I0318 12:51:08.745690 19996 controller.go:624] quota admission added evaluator for: horizontalpodautoscalers.autoscaling
horizontalpodautoscaler.autoscaling/frontend autoscaled
Successful
message:autoscaling/v2
has:autoscaling/v2
Successful
message:autoscaling/v2
has:autoscaling/v2
Successful
message:autoscaling/v2
has:autoscaling/v2
horizontalpodautoscaler.autoscaling "frontend" deleted
replicationcontroller "frontend" deleted
+++ exit code: 0
Recording: run_kubectl_create_error_tests
Running command: run_kubectl_create_error_tests
+++ Running case: test-cmd.run_kubectl_create_error_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0318 12:51:09] Creating namespace namespace-1679143869-10982
namespace/namespace-1679143869-10982 created
Context "test" modified.
+++ [0318 12:51:09] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

JSON and YAML formats are accepted.
Examples:
  # Create a pod using the data in pod.json
  kubectl create -f ./pod.json

  # Create a pod based on the JSON passed into stdin
  cat pod.json | kubectl create -f -

  # Edit the data in registry.yaml in JSON then create the resource using the edited data
  kubectl create -f registry.yaml --edit -o json

Available Commands:
  clusterrole           Create a cluster role
  clusterrolebinding    Create a cluster role binding for a particular cluster role
  configmap             Create a config map from a local file, directory or literal value
  cronjob               Create a cron job with the specified name
  deployment            Create a deployment with the specified name
  ingress               Create an ingress with the specified name
  job                   Create a job with the specified name
  namespace             Create a namespace with the specified name
  poddisruptionbudget   Create a pod disruption budget with the specified name
  priorityclass         Create a priority class with the specified name
  quota                 Create a quota with the specified name
  role                  Create a role with single rule
  rolebinding           Create a role binding for a particular role or cluster role
  secret                Create a secret using specified subcommand
  service               Create a service using a specified subcommand
  serviceaccount        Create a service account with the specified name
  token                 Request a service account token

Options:
    --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
    --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.
    --edit=false: Edit the API resource before creating
    --field-manager='kubectl-create': Name of the manager used to track field ownership.
    -f, --filename=[]: Filename, directory, or URL to files to use to create the resource
    -k, --kustomize='': Process the kustomization directory. This flag can't be used together with -f or -R.
    -o, --output='': Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).
    --raw='': Raw URI to POST to the server. Uses the transport specified by the kubeconfig file.
    -R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
    --save-config=false: If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.
    -l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.
    --show-managed-fields=false: If true, keep the managedFields when printing objects in JSON or YAML format.
    --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
    --validate='strict': Must be one of: strict (or true), warn, ignore (or false). "true" or "strict" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not. "warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise. "false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields.
    --windows-line-endings=false: Only relevant if --edit=true. Defaults to the line ending native to your platform.

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests
+++ Running case: test-cmd.run_kubectl_apply_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_apply_tests
+++ [0318 12:51:09] Creating namespace namespace-1679143869-2145
namespace/namespace-1679143869-2145 created
Context "test" modified.
+++ [0318 12:51:09] Testing kubectl apply
apply.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod created
apply.sh:34: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
Successful
message:kubectl-client-side-apply
has:kubectl-client-side-apply
pod "test-pod" deleted
apply.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod created
apply.sh:49: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
pod/test-pod configured (dry run)
pod/test-pod configured (server dry run)
pod/test-pod configured
pod "test-pod" deleted
apply.sh:65: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}:
I0318 12:51:11.508480 19996 controller.go:624] quota admission added evaluator for: deployments.apps
deployment.apps/test-deployment-retainkeys created
I0318 12:51:11.522546 19996 controller.go:624] quota admission added evaluator for: replicasets.apps
I0318 12:51:11.532838 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/test-deployment-retainkeys" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-deployment-retainkeys-6c5b6478cd to 1"
I0318 12:51:11.550380 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/test-deployment-retainkeys-6c5b6478cd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-6c5b6478cd-vnlgc"
apply.sh:69: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I0318 12:51:12.093111 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/test-deployment-retainkeys" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-deployment-retainkeys-6c5b6478cd to 0 from 1"
I0318 12:51:12.117915 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/test-deployment-retainkeys-6c5b6478cd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-deployment-retainkeys-6c5b6478cd-vnlgc"
I0318 12:51:12.175580 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/test-deployment-retainkeys" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-deployment-retainkeys-d65c44c97 to 1"
I0318 12:51:12.191439 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/test-deployment-retainkeys-d65c44c97" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-d65c44c97-lrwff"
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod created (dry run)
pod/test-pod created (server dry run)
apply.sh:107: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod created
pod/test-pod configured (dry run)
pod/test-pod configured (server dry run)
apply.sh:115: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
Successful
message:664
has:664
pod "test-pod" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0318 12:51:14.663247 19996 handler.go:165] Adding GroupVersion mygroup.example.com v1alpha1 to ResourceManager
Successful
message:resources.mygroup.example.com
has:resources.mygroup.example.com
I0318 12:51:17.247486 19996 controller.go:624] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj created (server dry run)
I0318 12:51:17.386901 19996 handler.go:165] Adding GroupVersion mygroup.example.com v1alpha1 to ResourceManager
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0318 12:51:17.398586 19996 handler.go:165] Adding GroupVersion mygroup.example.com v1alpha1 to ResourceManager
namespace/nsb created
apply.sh:181: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/a created
apply.sh:184: Successful get pods a -n nsb {{.metadata.name}}: a
pod/b created
W0318 12:51:18.545620 32145 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
pod/a pruned
apply.sh:188: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
pod "b" deleted
apply.sh:195: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/a created
apply.sh:200: Successful get pods a {{.metadata.name}}: a
apply.sh:202: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}:
pod/b created
apply.sh:207: Successful get pods a {{.metadata.name}}: a
apply.sh:208: Successful get pods b -n nsb {{.metadata.name}}: b
pod "a" deleted
pod "b" deleted
Successful
message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
pod/a created
pod/b created
I0318 12:51:21.597827 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679143869-2145/prune-svc" clusterIPs=map[IPv4:10.0.0.126]
service/prune-svc created
W0318 12:51:21.598448 32320 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
I0318 12:51:23.757818 23056 horizontal.go:512] "Horizontal Pod Autoscaler has been deleted" HPA="namespace-1679143866-14796/frontend"
apply.sh:220: Successful get pods a {{.metadata.name}}: a
apply.sh:221: Successful get pods b -n nsb {{.metadata.name}}: b
pod "a" deleted
pod "b" deleted
namespace "nsb" deleted
persistentvolumeclaim/a-pvc created
W0318 12:51:31.383273 32394 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
I0318 12:51:31.383984 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/a-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
I0318 12:51:31.419990 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/a-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
service/prune-svc pruned
apply.sh:228: Successful get pvc a-pvc {{.metadata.name}}: a-pvc
persistentvolumeclaim/b-pvc created
W0318 12:51:33.084677 32421 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
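The repeated deprecation warnings above concern pruning of non-namespaced objects under --namespace. The forward-compatible form names the prunable types explicitly (directory and label are illustrative; the flags are standard kubectl):

kubectl apply -f manifests/ \
  --prune -l app=nginx \
  --prune-allowlist=core/v1/Pod \
  --prune-allowlist=core/v1/Service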
I0318 12:51:33.085019 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/b-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
I0318 12:51:33.106843 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/b-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
persistentvolumeclaim/a-pvc pruned
I0318 12:51:33.125176 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/a-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
apply.sh:230: Successful get pvc b-pvc {{.metadata.name}}: b-pvc
apply.sh:231: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
persistentvolumeclaim "b-pvc" deleted
I0318 12:51:34.714827 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/b-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
apply.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/a created
W0318 12:51:35.106022 32491 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
I0318 12:51:36.129813 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="nsb"
apply.sh:240: Successful get pods a {{.metadata.name}}: a
Flag --prune-whitelist has been deprecated, Use --prune-allowlist instead.
I0318 12:51:36.615132 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679143869-2145/prune-svc" clusterIPs=map[IPv4:10.0.0.181]
service/prune-svc created
apply.sh:243: Successful get service prune-svc {{.metadata.name}}: prune-svc
apply.sh:244: Successful get pods a {{.metadata.name}}: a
service/prune-svc unchanged
W0318 12:51:36.973619 32561 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
pod/a pruned
apply.sh:247: Successful get service prune-svc {{.metadata.name}}: prune-svc
apply.sh:248: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
service "prune-svc" deleted
namespace/nsb created
apply.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/a created
apply.sh:258: Successful get pods a -n nsb {{.metadata.name}}: a
pod/b created
apply.sh:261: Successful get pods b -n nsb {{.metadata.name}}: b
pod/b unchanged
W0318 12:51:39.546530 32692 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
pod/a pruned
apply.sh:265: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
namespace "nsb" deleted
Successful
message:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
apply.sh:276: Successful get services {{range.items}}{{.metadata.name}}:{{end}}:
service/a created
apply.sh:280: Successful get services a {{.metadata.name}}: a
Successful
message:The Service "a" is invalid: spec.clusterIPs[0]: Invalid value: []string{"10.0.0.12"}: may not change once set
has:may not change once set
I0318 12:51:47.297448 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679143869-2145/a" clusterIPs=map[IPv4:10.0.0.12]
service/a configured
apply.sh:287: Successful get services a {{.spec.clusterIP}}: 10.0.0.12
service "a" deleted
configmap/test-the-map created
I0318 12:51:47.800859 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679143869-2145/test-the-service" clusterIPs=map[IPv4:10.0.0.179]
service/test-the-service created
deployment.apps/test-the-deployment created
I0318 12:51:47.838530 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/test-the-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-the-deployment-6ccf78d7dd to 3"
I0318 12:51:47.862345 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/test-the-deployment-6ccf78d7dd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6ccf78d7dd-gm7dl"
I0318 12:51:47.878823 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/test-the-deployment-6ccf78d7dd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6ccf78d7dd-cdmfc"
I0318 12:51:47.878854 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/test-the-deployment-6ccf78d7dd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6ccf78d7dd-bv7dh"
apply.sh:293: Successful get configmap test-the-map {{.metadata.name}}: test-the-map
apply.sh:294: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
apply.sh:295: Successful get service test-the-service {{.metadata.name}}: test-the-service
configmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
configmap/test-the-map created
I0318 12:51:48.521666 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679143869-2145/test-the-service" clusterIPs=map[IPv4:10.0.0.39]
service/test-the-service created
deployment.apps/test-the-deployment created
I0318 12:51:48.557098 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/test-the-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-the-deployment-6ccf78d7dd to 3"
I0318 12:51:48.573700 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/test-the-deployment-6ccf78d7dd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6ccf78d7dd-9gtp5"
I0318 12:51:48.589498 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/test-the-deployment-6ccf78d7dd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6ccf78d7dd-kzghs"
I0318 12:51:48.589526 23056 event.go:307] "Event occurred" object="namespace-1679143869-2145/test-the-deployment-6ccf78d7dd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6ccf78d7dd-hfnzs"
apply.sh:301: Successful get configmap test-the-map {{.metadata.name}}: test-the-map
apply.sh:302: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
apply.sh:303: Successful get service test-the-service {{.metadata.name}}: test-the-service
configmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
message:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:311: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
Successful
message:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
message:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
apply.sh:319: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
pod "test-pod" deleted
namespace "multi-resource-ns" deleted
I0318 12:51:51.232639 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="nsb"
apply.sh:325: Successful get configmaps --field-selector=metadata.name=foo {{range.items}}{{.metadata.name}}:{{end}}:
Successful
message:configmap/foo created
error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
ensure CRDs are installed first
has:no matches for kind "Bogus" in version "example.com/v1"
apply.sh:331: Successful get configmaps foo {{.metadata.name}}: foo
configmap "foo" deleted
apply.sh:337: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
Successful
message:pod/pod-a created
pod/pod-c created
The Pod "POD-B" is invalid: metadata.name: Invalid value: "POD-B": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
has:The Pod "POD-B" is invalid
apply.sh:341: Successful get pods pod-a {{.metadata.name}}: pod-a
apply.sh:342: Successful get pods pod-c {{.metadata.name}}: pod-c
pod "pod-a" deleted
pod "pod-c" deleted
apply.sh:345: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
apply.sh:349: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}:
Successful
message:customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
ensure CRDs are installed first
has:no matches for kind "Widget" in version "example.com/v1"
I0318 12:51:56.468395 19996 handler.go:165] Adding GroupVersion example.com v1 to ResourceManager
customresourcedefinition.apiextensions.k8s.io/widgets.example.com condition met
Successful
message:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:356: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
I0318 12:51:58.976061 19996 controller.go:624] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
apply.sh:359: Successful get widget foo {{.metadata.name}}: foo
widget.example.com "foo" deleted
I0318 12:51:59.138579 19996 handler.go:165] Adding GroupVersion example.com v1 to ResourceManager
customresourcedefinition.apiextensions.k8s.io "widgets.example.com" deleted
I0318 12:51:59.149235 19996 handler.go:165] Adding GroupVersion example.com v1 to ResourceManager
+++ exit code: 0
Recording: run_kubectl_server_side_apply_tests
Running command: run_kubectl_server_side_apply_tests
+++ Running case: test-cmd.run_kubectl_server_side_apply_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_server_side_apply_tests
+++ [0318 12:51:59] Creating namespace namespace-1679143919-30781
namespace/namespace-1679143919-30781 created
Context "test" modified.
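The widgets.example.com exchange above is the reason for the "ensure CRDs are installed first" hint: a CRD and resources of its kind cannot reliably be created in a single apply, because the new kind is only served once the CRD is established. A minimal sketch of the safe ordering, with file names assumed:

# register the CRD and wait until the API server actually serves the new kind
kubectl apply -f widgets-crd.yaml
kubectl wait --for condition=established --timeout=60s crd/widgets.example.com
# only then create objects of the new kind
kubectl apply -f widget-foo.yaml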
+++ [0318 12:51:59] Testing kubectl apply --server-side
apply.sh:376: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
I0318 12:52:00.014136 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="multi-resource-ns"
pod/test-pod serverside-applied
apply.sh:380: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
Successful
message:kubectl
has:kubectl
pod/test-pod serverside-applied
Successful
message:my-field-manager
kubectl
has:my-field-manager
pod "test-pod" deleted
apply.sh:393: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod serverside-applied (server dry run)
apply.sh:398: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod serverside-applied
pod/test-pod serverside-applied (server dry run)
apply.sh:405: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
Successful
message:899
has:899
pod "test-pod" deleted
apply.sh:415: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
+++ [0318 12:52:02] Testing upgrade kubectl client-side apply to server-side apply
pod/test-pod created
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using v1: .metadata.labels.name
Please review the fields above--they currently have other managers. Here are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your manifest to remove references to the fields that should keep their current managers.
* You may co-own fields by updating your manifest to match the existing value; in this case, you'll become the manager if the other manager(s) stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
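The conflict text above is the standard server-side apply escalation path; its first option, taking ownership of the contested fields, looks like this (the manifest name is assumed, the field manager matches the one exercised earlier in this run):

# forcibly become the manager of the conflicting fields
kubectl apply --server-side --force-conflicts \
  --field-manager=my-field-manager -f pod.yaml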
pod/test-pod serverside-applied
Successful
message:{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "labels": {
            "name": "test-pod-applied"
        },
        "name": "test-pod",
        "namespace": "namespace-1679143919-30781"
    },
    "spec": {
        "containers": [
            {
                "image": "registry.k8s.io/pause:3.9",
                "name": "kubernetes-pause"
            }
        ]
    }
}
has:"name": "test-pod-applied"
+++ [0318 12:52:03] Testing downgrade kubectl server-side apply to client-side apply
pod/test-pod serverside-applied
Successful
message:{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "labels": {
            "name": "test-pod-label"
        },
        "name": "test-pod",
        "namespace": "namespace-1679143919-30781"
    },
    "spec": {
        "containers": [
            {
                "image": "registry.k8s.io/pause:3.9",
                "name": "kubernetes-pause"
            }
        ]
    }
}
has:"name": "test-pod-label"
pod/test-pod configured
pod "test-pod" deleted
Successful
message:configmap/test created
has:configmap/test created
Successful
message:configmap/test serverside-applied
has:configmap/test serverside-applied
Successful
message:configmap/test serverside-applied
has:configmap/test serverside-applied
apply.sh:488: Successful get configmap test {{ .data.key }}: value
apply.sh:489: Successful get configmap test {{ .data.legacy }}:
apply.sh:490: Successful get configmap test {{ .data.ssaKey }}: ssaValue
apply.sh:505: Successful get configmap test {{ .data.key }}: value
apply.sh:506: Successful get configmap test {{ .data.newKey }}: newValue
apply.sh:507: Successful get configmap test {{ .data.ssaKey }}:
apply.sh:523: Successful get configmap test {{ .data.key }}: value
apply.sh:524: Successful get configmap test {{ .data.newKey }}:
apply.sh:525: Successful get configmap test {{ .data.ssaKey }}: ssaValue
Successful
message:diff -u -N /tmp/LIVE-170582131/v1.ConfigMap.namespace-1679143919-30781.test /tmp/MERGED-1133575692/v1.ConfigMap.namespace-1679143919-30781.test
--- /tmp/LIVE-170582131/v1.ConfigMap.namespace-1679143919-30781.test	2023-03-18 12:52:05.550286870 +0000
+++ /tmp/MERGED-1133575692/v1.ConfigMap.namespace-1679143919-30781.test	2023-03-18 12:52:05.550286870 +0000
@@ -1,12 +1,13 @@
 apiVersion: v1
 data:
   key: value
-  ssaKey: ssaValue
+  newKey: newValue
 kind: ConfigMap
 metadata:
   annotations:
     kubectl.kubernetes.io/last-applied-configuration: |
-      {"apiVersion":"v1","data":{"key":"value","ssaKey":"ssaValue"},"kind":"ConfigMap","metadata":{"name":"test","namespace":"namespace-1679143919-30781"}}
+      {"apiVersion":"v1","data":{"key":"value","newKey":"newValue"},"kind":"ConfigMap","metadata":{"annotations":{"newAnnotation":"newValue"},"name":"test","namespace":"namespace-1679143919-30781"}}
+    newAnnotation: newValue
   creationTimestamp: "2023-03-18T12:52:03Z"
   name: test
   namespace: namespace-1679143919-30781
has:+ newKey: newValue
Successful
message:diff -u -N /tmp/LIVE-170582131/v1.ConfigMap.namespace-1679143919-30781.test /tmp/MERGED-1133575692/v1.ConfigMap.namespace-1679143919-30781.test
--- /tmp/LIVE-170582131/v1.ConfigMap.namespace-1679143919-30781.test	2023-03-18 12:52:05.550286870 +0000
+++ /tmp/MERGED-1133575692/v1.ConfigMap.namespace-1679143919-30781.test	2023-03-18 12:52:05.550286870 +0000
@@ -1,12 +1,13 @@
 apiVersion: v1
 data:
   key: value
-  ssaKey: ssaValue
+  newKey: newValue
 kind: ConfigMap
 metadata:
   annotations:
     kubectl.kubernetes.io/last-applied-configuration: |
-      {"apiVersion":"v1","data":{"key":"value","ssaKey":"ssaValue"},"kind":"ConfigMap","metadata":{"name":"test","namespace":"namespace-1679143919-30781"}}
+      {"apiVersion":"v1","data":{"key":"value","newKey":"newValue"},"kind":"ConfigMap","metadata":{"annotations":{"newAnnotation":"newValue"},"name":"test","namespace":"namespace-1679143919-30781"}}
+    newAnnotation: newValue
   creationTimestamp: "2023-03-18T12:52:03Z"
   name: test
   namespace: namespace-1679143919-30781
has:+ newAnnotation: newValue
configmap "test" deleted
Successful
message:configmap/ssa-test created
has:configmap/ssa-test created
apply.sh:559: Successful get configmap ssa-test {{ .data.key }}: value1
Successful
message:configmap/ssa-test serverside-applied
has:configmap/ssa-test serverside-applied
apply.sh:577: Successful get configmap ssa-test {{ .data.key }}: value1
Successful
message:configmap/ssa-test serverside-applied
has:configmap/ssa-test serverside-applied
apply.sh:594: Successful get configmap ssa-test {{ .data.key }}: value2
apply.sh:595: Successful get configmap ssa-test {{ .data.legacy }}:
configmap "ssa-test" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0318 12:52:07.103099 19996 handler.go:165] Adding GroupVersion mygroup.example.com v1alpha1 to ResourceManager
Successful
message:resources.mygroup.example.com
has:resources.mygroup.example.com
kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0318 12:52:07.485844 19996 handler.go:165] Adding GroupVersion mygroup.example.com v1alpha1 to ResourceManager
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0318 12:52:07.499410 19996 handler.go:165] Adding GroupVersion mygroup.example.com v1alpha1 to ResourceManager
+++ exit code: 0
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests
+++ Running case: test-cmd.run_kubectl_run_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_run_tests
+++ [0318 12:52:07] Creating namespace namespace-1679143927-2947
namespace/namespace-1679143927-2947 created
Context "test" modified.
+++ [0318 12:52:07] Testing kubectl run
pod/nginx-extensions created (dry run)
pod/nginx-extensions created (server dry run)
run.sh:32: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
run.sh:35: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/nginx-extensions created
run.sh:39: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: nginx-extensions:
pod "nginx-extensions" deleted
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests
+++ Running case: test-cmd.run_kubectl_create_filter_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_filter_tests
+++ [0318 12:52:08] Creating namespace namespace-1679143928-31748
namespace/namespace-1679143928-31748 created
Context "test" modified.
+++ [0318 12:52:08] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests
+++ Running case: test-cmd.run_kubectl_apply_deployments_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_apply_deployments_tests
+++ [0318 12:52:09] Creating namespace namespace-1679143929-14049
namespace/namespace-1679143929-14049 created
Context "test" modified.
+++ [0318 12:52:09] Testing kubectl apply deployments
apps.sh:150: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}:
apps.sh:151: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}:
apps.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
deployment.apps/my-depl created
I0318 12:52:09.823713 23056 event.go:307] "Event occurred" object="namespace-1679143929-14049/my-depl" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set my-depl-bfb57d6df to 1"
I0318 12:52:09.865094 23056 event.go:307] "Event occurred" object="namespace-1679143929-14049/my-depl-bfb57d6df" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: my-depl-bfb57d6df-jp424"
apps.sh:156: Successful get deployments my-depl {{.metadata.name}}: my-depl
apps.sh:158: Successful get deployments my-depl {{.spec.template.metadata.labels.l1}}: l1
apps.sh:159: Successful get deployments my-depl {{.spec.selector.matchLabels.l1}}: l1
apps.sh:160: Successful get deployments my-depl {{.metadata.labels.l1}}: l1
deployment.apps/my-depl configured
apps.sh:165: Successful get deployments my-depl {{.spec.template.metadata.labels.l1}}: l1
apps.sh:166: Successful get deployments my-depl {{.spec.selector.matchLabels.l1}}: l1
apps.sh:167: Successful get deployments my-depl {{.metadata.labels.l1}}:
deployment.apps "my-depl" deleted
replicaset.apps "my-depl-bfb57d6df" deleted
pod "my-depl-bfb57d6df-jp424" deleted
E0318 12:52:10.702168 23056 replica_set.go:544] sync "namespace-1679143929-14049/my-depl-bfb57d6df" failed with replicasets.apps "my-depl-bfb57d6df" not found
apps.sh:173: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}:
apps.sh:174: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}:
apps.sh:175: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
apps.sh:179: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}:
deployment.apps/nginx created
I0318 12:52:11.286900 23056 event.go:307] "Event occurred" object="namespace-1679143929-14049/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-5645b79496 to 3"
I0318 12:52:11.319650 23056 event.go:307] "Event occurred" object="namespace-1679143929-14049/nginx-5645b79496" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5645b79496-zthv9"
I0318 12:52:11.335484 23056 event.go:307] "Event occurred" object="namespace-1679143929-14049/nginx-5645b79496" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5645b79496-kq95p"
I0318 12:52:11.335543 23056 event.go:307] "Event occurred" object="namespace-1679143929-14049/nginx-5645b79496" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5645b79496-5dmr5"
apps.sh:183: Successful get deployment nginx {{.metadata.name}}: nginx
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1679143929-14049\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"registry.k8s.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1679143929-14049"
for: "hack/testdata/deployment-label-change2.yaml": error when patching "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
deployment.apps/nginx configured
I0318 12:52:19.854547 23056 event.go:307] "Event occurred" object="namespace-1679143929-14049/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-5675dfc785 to 3"
I0318 12:52:19.870773 23056 event.go:307] "Event occurred" object="namespace-1679143929-14049/nginx-5675dfc785" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5675dfc785-88f4n"
I0318 12:52:19.890926 23056 event.go:307] "Event occurred" object="namespace-1679143929-14049/nginx-5675dfc785" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5675dfc785-mw89l"
I0318 12:52:19.891209 23056 event.go:307] "Event occurred" object="namespace-1679143929-14049/nginx-5675dfc785" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5675dfc785-qwd46"
Successful
message:    "name": "nginx2"
    "name": "nginx2"
has:"name": "nginx2"
Successful
message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
has:Invalid value
I0318 12:52:24.195985 23056 event.go:307] "Event occurred" object="namespace-1679143929-14049/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-5675dfc785 to 3"
I0318 12:52:24.208620 23056 event.go:307] "Event occurred" object="namespace-1679143929-14049/nginx-5675dfc785" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5675dfc785-qf9sq"
I0318 12:52:24.220860 23056 event.go:307] "Event occurred" object="namespace-1679143929-14049/nginx-5675dfc785" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5675dfc785-4w6hk"
I0318 12:52:24.221102 23056 event.go:307] "Event occurred" object="namespace-1679143929-14049/nginx-5675dfc785" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5675dfc785-lbg8l"
apps.sh:203: Successful get deployment nginx {{.spec.template.metadata.labels.name}}: nginx2
deployment.apps "nginx" deleted
+++ exit code: 0
Recording: run_kubectl_diff_tests
Running command: run_kubectl_diff_tests
+++ Running case: test-cmd.run_kubectl_diff_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_diff_tests
+++ [0318 12:52:24] Creating namespace namespace-1679143944-21585
namespace/namespace-1679143944-21585 created
Context "test" modified.
+++ [0318 12:52:24] Testing kubectl diff
Successful
message:diff -u -N /tmp/LIVE-1881965448/v1.Pod.namespace-1679143944-21585.test-pod /tmp/MERGED-3752225721/v1.Pod.namespace-1679143944-21585.test-pod
--- /tmp/LIVE-1881965448/v1.Pod.namespace-1679143944-21585.test-pod	2023-03-18 12:52:24.719963111 +0000
+++ /tmp/MERGED-3752225721/v1.Pod.namespace-1679143944-21585.test-pod	2023-03-18 12:52:24.723963461 +0000
@@ -0,0 +1,28 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  creationTimestamp: "2023-03-18T12:52:24Z"
+  labels:
+    name: test-pod-label
+  name: test-pod
+  namespace: namespace-1679143944-21585
+  uid: 58bdad11-7de5-43b7-9429-3bdda9120b68
+spec:
+  containers:
+  - image: registry.k8s.io/pause:3.9
+    imagePullPolicy: IfNotPresent
+    name: kubernetes-pause
+    resources: {}
+    terminationMessagePath: /dev/termination-log
+    terminationMessagePolicy: File
+  dnsPolicy: ClusterFirst
+  enableServiceLinks: true
+  preemptionPolicy: PreemptLowerPriority
+  priority: 0
+  restartPolicy: Always
+  schedulerName: default-scheduler
+  securityContext: {}
+  terminationGracePeriodSeconds: 30
+status:
+  phase: Pending
+  qosClass: BestEffort
has:test-pod
diff.sh:33: Successful get pod {{range.items}}{{ if eq .metadata.name "test-pod" }}found{{end}}{{end}}:: :
pod/test-pod created
diff.sh:36: Successful get pod {{range.items}}{{ if eq .metadata.name "test-pod" }}found{{end}}{{end}}:: found:
Successful
message:1064
has:1064
Successful
message:diff -u -N /tmp/LIVE-2376230994/v1.Pod.namespace-1679143944-21585.test-pod /tmp/MERGED-211316383/v1.Pod.namespace-1679143944-21585.test-pod
--- /tmp/LIVE-2376230994/v1.Pod.namespace-1679143944-21585.test-pod	2023-03-18 12:52:25.520033074 +0000
+++ /tmp/MERGED-211316383/v1.Pod.namespace-1679143944-21585.test-pod	2023-03-18 12:52:25.520033074 +0000
@@ -13,7 +13,7 @@
   uid: c5334185-e4a3-412c-868d-90e16d3cb04b
 spec:
   containers:
-  - image: registry.k8s.io/pause:3.9
+  - image: registry.k8s.io/pause:3.4
     imagePullPolicy: IfNotPresent
     name: kubernetes-pause
     resources: {}
has:registry.k8s.io/pause:3.4
Successful
message:diff -u -N /tmp/LIVE-2376230994/v1.Pod.namespace-1679143944-21585.test-pod /tmp/MERGED-211316383/v1.Pod.namespace-1679143944-21585.test-pod
--- /tmp/LIVE-2376230994/v1.Pod.namespace-1679143944-21585.test-pod	2023-03-18 12:52:25.520033074 +0000
+++ /tmp/MERGED-211316383/v1.Pod.namespace-1679143944-21585.test-pod	2023-03-18 12:52:25.520033074 +0000
@@ -13,7 +13,7 @@
   uid: c5334185-e4a3-412c-868d-90e16d3cb04b
 spec:
   containers:
-  - image: registry.k8s.io/pause:3.9
+  - image: registry.k8s.io/pause:3.4
     imagePullPolicy: IfNotPresent
     name: kubernetes-pause
     resources: {}
has not:exit status 1
Successful
message:1064
has:1064
Successful
message:diff -u -N /tmp/LIVE-793672626/v1.Pod.namespace-1679143944-21585.test-pod /tmp/MERGED-3675599461/v1.Pod.namespace-1679143944-21585.test-pod
--- /tmp/LIVE-793672626/v1.Pod.namespace-1679143944-21585.test-pod	2023-03-18 12:52:25.644043918 +0000
+++ /tmp/MERGED-3675599461/v1.Pod.namespace-1679143944-21585.test-pod	2023-03-18 12:52:25.644043918 +0000
@@ -3,7 +3,7 @@
 metadata:
   annotations:
     kubectl.kubernetes.io/last-applied-configuration: |
-      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"test-pod-label"},"name":"test-pod","namespace":"namespace-1679143944-21585"},"spec":{"containers":[{"image":"registry.k8s.io/pause:3.9","name":"kubernetes-pause"}]}}
+      {"apiVersion":"v1","kind":"Pod","metadata":{"labels":{"name":"test-pod-label"},"name":"test-pod","namespace":"namespace-1679143944-21585"},"spec":{"containers":[{"image":"registry.k8s.io/pause:3.4","name":"kubernetes-pause"}]}}
   creationTimestamp: "2023-03-18T12:52:24Z"
   labels:
     name: test-pod-label
@@ -13,7 +13,7 @@
   uid: c5334185-e4a3-412c-868d-90e16d3cb04b
 spec:
   containers:
-  - image: registry.k8s.io/pause:3.9
+  - image: registry.k8s.io/pause:3.4
     imagePullPolicy: IfNotPresent
     name: kubernetes-pause
     resources: {}
has:registry.k8s.io/pause:3.4
Successful
message:1064
has:1064
The Pod "test" is invalid: spec.containers[0].name: Required value
pod "test-pod" deleted
+++ [0318 12:52:25] Testing kubectl diff with server-side apply
Successful
message:diff -u -N /tmp/LIVE-2093357168/v1.Pod.namespace-1679143944-21585.test-pod /tmp/MERGED-3360120879/v1.Pod.namespace-1679143944-21585.test-pod
--- /tmp/LIVE-2093357168/v1.Pod.namespace-1679143944-21585.test-pod	2023-03-18 12:52:26.036078199 +0000
+++ /tmp/MERGED-3360120879/v1.Pod.namespace-1679143944-21585.test-pod	2023-03-18 12:52:26.040078549 +0000
@@ -0,0 +1,28 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  creationTimestamp: "2023-03-18T12:52:26Z"
+  labels:
+    name: test-pod-label
+  name: test-pod
+  namespace: namespace-1679143944-21585
+  uid: 4548957c-5156-4d97-8d30-2f70dc7e35ee
+spec:
+  containers:
+  - image: registry.k8s.io/pause:3.9
+    imagePullPolicy: IfNotPresent
+    name: kubernetes-pause
+    resources: {}
+    terminationMessagePath: /dev/termination-log
+    terminationMessagePolicy: File
+  dnsPolicy: ClusterFirst
+  enableServiceLinks: true
+  preemptionPolicy: PreemptLowerPriority
+  priority: 0
+  restartPolicy: Always
+  schedulerName: default-scheduler
+  securityContext: {}
+  terminationGracePeriodSeconds: 30
+status:
+  phase: Pending
+  qosClass: BestEffort
has:test-pod
diff.sh:78: Successful get pod {{range.items}}{{ if eq .metadata.name "test-pod" }}found{{end}}{{end}}:: :
pod/test-pod serverside-applied
diff.sh:82: Successful get pod {{range.items}}{{ if eq .metadata.name "test-pod" }}found{{end}}{{end}}:: found:
Successful
message:diff -u -N /tmp/LIVE-2189830100/v1.Pod.namespace-1679143944-21585.test-pod /tmp/MERGED-4274088931/v1.Pod.namespace-1679143944-21585.test-pod
--- /tmp/LIVE-2189830100/v1.Pod.namespace-1679143944-21585.test-pod	2023-03-18 12:52:26.556123675 +0000
+++ /tmp/MERGED-4274088931/v1.Pod.namespace-1679143944-21585.test-pod	2023-03-18 12:52:26.556123675 +0000
@@ -10,7 +10,7 @@
   uid: 425c9f42-ea76-4cb0-8925-775d3431f724
 spec:
   containers:
-  - image: registry.k8s.io/pause:3.9
+  - image: registry.k8s.io/pause:3.4
     imagePullPolicy: IfNotPresent
     name: kubernetes-pause
     resources: {}
has:registry.k8s.io/pause:3.4
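kubectl diff reports its outcome through the exit status, which is what the "has not:exit status 1" assertion above relies on. A minimal sketch (the manifest name is assumed):

# exit status: 0 = no differences, 1 = differences found, >1 = an error occurred
kubectl diff -f pod.yaml
echo "diff exit status: $?"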
namespace/nsb created
pod/a created
diff.sh:96: Successful get pods a -n nsb {{.metadata.name}}: a
Successful
message:diff -u -N /tmp/LIVE-764414890/v1.Pod.nsb.b /tmp/MERGED-855870546/v1.Pod.nsb.b
--- /tmp/LIVE-764414890/v1.Pod.nsb.b	2023-03-18 12:52:27.080169501 +0000
+++ /tmp/MERGED-855870546/v1.Pod.nsb.b	2023-03-18 12:52:27.084169851 +0000
@@ -0,0 +1,28 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  creationTimestamp: "2023-03-18T12:52:27Z"
+  labels:
+    prune-group: "true"
+  name: b
+  namespace: nsb
+  uid: a7474f81-074b-440f-8125-a862632e260c
+spec:
+  containers:
+  - image: registry.k8s.io/pause:3.7
+    imagePullPolicy: IfNotPresent
+    name: kubernetes-pause
+    resources: {}
+    terminationMessagePath: /dev/termination-log
+    terminationMessagePolicy: File
+  dnsPolicy: ClusterFirst
+  enableServiceLinks: true
+  preemptionPolicy: PreemptLowerPriority
+  priority: 0
+  restartPolicy: Always
+  schedulerName: default-scheduler
+  securityContext: {}
+  terminationGracePeriodSeconds: 30
+status:
+  phase: Pending
+  qosClass: BestEffort
has not:name: a
W0318 12:52:27.254753 35336 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
Successful
message:diff -u -N /tmp/LIVE-4118598133/v1.Pod.nsb.a /tmp/MERGED-971800889/v1.Pod.nsb.a
--- /tmp/LIVE-4118598133/v1.Pod.nsb.a	2023-03-18 12:52:28.452289488 +0000
+++ /tmp/MERGED-971800889/v1.Pod.nsb.a	1970-01-01 00:00:00.000000000 +0000
@@ -1,62 +0,0 @@
-apiVersion: v1
-kind: Pod
-metadata:
-  annotations:
-    kubectl.kubernetes.io/last-applied-configuration: |
-      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"prune-group":"true"},"name":"a","namespace":"nsb"},"spec":{"containers":[{"image":"registry.k8s.io/pause:3.9","name":"kubernetes-pause"}]}}
-  creationTimestamp: "2023-03-18T12:52:26Z"
-  labels:
-    prune-group: "true"
-  managedFields:
-  - apiVersion: v1
-    fieldsType: FieldsV1
-    fieldsV1:
-      f:metadata:
-        f:annotations:
-          .: {}
-          f:kubectl.kubernetes.io/last-applied-configuration: {}
-        f:labels:
-          .: {}
-          f:prune-group: {}
-      f:spec:
-        f:containers:
-          k:{"name":"kubernetes-pause"}:
-            .: {}
-            f:image: {}
-            f:imagePullPolicy: {}
-            f:name: {}
-            f:resources: {}
-            f:terminationMessagePath: {}
-            f:terminationMessagePolicy: {}
-        f:dnsPolicy: {}
-        f:enableServiceLinks: {}
-        f:restartPolicy: {}
-        f:schedulerName: {}
-        f:securityContext: {}
-        f:terminationGracePeriodSeconds: {}
-    manager: kubectl-client-side-apply
-    operation: Update
-    time: "2023-03-18T12:52:26Z"
-  name: a
-  namespace: nsb
-  resourceVersion: "1072"
-  uid: 326b66cf-cc0c-4bac-ae3b-5d20c10f2486
-spec:
-  containers:
-  - image: registry.k8s.io/pause:3.9
-    imagePullPolicy: IfNotPresent
-    name: kubernetes-pause
-    resources: {}
-    terminationMessagePath: /dev/termination-log
-    terminationMessagePolicy: File
-  dnsPolicy: ClusterFirst
-  enableServiceLinks: true
-  preemptionPolicy: PreemptLowerPriority
-  priority: 0
-  restartPolicy: Always
-  schedulerName: default-scheduler
-  securityContext: {}
-  terminationGracePeriodSeconds: 30
-status:
-  phase: Pending
-  qosClass: BestEffort
diff -u -N /tmp/LIVE-4118598133/v1.Pod.nsb.b /tmp/MERGED-971800889/v1.Pod.nsb.b
--- /tmp/LIVE-4118598133/v1.Pod.nsb.b	2023-03-18 12:52:27.248184193 +0000
+++ /tmp/MERGED-971800889/v1.Pod.nsb.b	2023-03-18 12:52:27.248184193 +0000
@@ -0,0 +1,28 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  creationTimestamp: "2023-03-18T12:52:27Z"
+  labels:
+    prune-group: "true"
+  name: b
+  namespace: nsb
+  uid: 621303a8-5e79-43e3-9f9c-90da80141fd9
+spec:
+  containers:
+  - image: registry.k8s.io/pause:3.7
+    imagePullPolicy: IfNotPresent
+    name: kubernetes-pause
+    resources: {}
+    terminationMessagePath: /dev/termination-log
+    terminationMessagePolicy: File
+  dnsPolicy: ClusterFirst
+  enableServiceLinks: true
+  preemptionPolicy: PreemptLowerPriority
+  priority: 0
+  restartPolicy: Always
+  schedulerName: default-scheduler
+  securityContext: {}
+  terminationGracePeriodSeconds: 30
+status:
+  phase: Pending
+  qosClass: BestEffort
has:name: a
pod/b created
W0318 12:52:28.696846 35348 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
pod/a pruned
diff.sh:109: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
W0318 12:52:30.350541 35374 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
pod "test-pod" deleted
pod "b" deleted
namespace "nsb" deleted
namespace/nsbprune created
pod/a created
pod/b created
pod/c created
diff.sh:157: Successful get pods a -n nsbprune {{.metadata.name}}: a
diff.sh:158: Successful get pods b -n nsbprune {{.metadata.name}}: b
diff.sh:159: Successful get pods c -n nsbprune {{.metadata.name}}: c
Successful
message:
has not:name: b
Successful
message:
has not:name: c
W0318 12:52:37.847850 35506 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
Successful
message:diff -u -N /tmp/LIVE-4187292266/v1.Pod.nsbprune.b /tmp/MERGED-1028765149/v1.Pod.nsbprune.b
--- /tmp/LIVE-4187292266/v1.Pod.nsbprune.b	2023-03-18 12:52:39.045215817 +0000
+++ /tmp/MERGED-1028765149/v1.Pod.nsbprune.b	1970-01-01 00:00:00.000000000 +0000
@@ -1,62 +0,0 @@
-apiVersion: v1
-kind: Pod
-metadata:
-  annotations:
-    kubectl.kubernetes.io/last-applied-configuration: |
-      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"prune-group":"true"},"name":"b","namespace":"nsbprune"},"spec":{"containers":[{"image":"registry.k8s.io/pause:3.9","name":"kubernetes-pause"}]}}
-  creationTimestamp: "2023-03-18T12:52:37Z"
-  labels:
-    prune-group: "true"
-  managedFields:
-  - apiVersion: v1
-    fieldsType: FieldsV1
-    fieldsV1:
-      f:metadata:
-        f:annotations:
-          .: {}
-          f:kubectl.kubernetes.io/last-applied-configuration: {}
-        f:labels:
-          .: {}
-          f:prune-group: {}
-      f:spec:
-        f:containers:
-          k:{"name":"kubernetes-pause"}:
-            .: {}
-            f:image: {}
-            f:imagePullPolicy: {}
-            f:name: {}
-            f:resources: {}
-            f:terminationMessagePath: {}
-            f:terminationMessagePolicy: {}
-        f:dnsPolicy: {}
-        f:enableServiceLinks: {}
-        f:restartPolicy: {}
-        f:schedulerName: {}
-        f:securityContext: {}
-        f:terminationGracePeriodSeconds: {}
-    manager: kubectl-client-side-apply
-    operation: Update
-    time: "2023-03-18T12:52:37Z"
-  name: b
-  namespace: nsbprune
-  resourceVersion: "1096"
-  uid: 5379568d-04ae-4971-b0ac-34966637f75e
-spec:
-  containers:
-  - image: registry.k8s.io/pause:3.9
-    imagePullPolicy: IfNotPresent
-    name: kubernetes-pause
-    resources: {}
-    terminationMessagePath: /dev/termination-log
-    terminationMessagePolicy: File
-  dnsPolicy: ClusterFirst
-  enableServiceLinks: true
-  preemptionPolicy: PreemptLowerPriority
-  priority: 0
-  restartPolicy: Always
-  schedulerName: default-scheduler
-  securityContext: {}
-  terminationGracePeriodSeconds: 30
-status:
-  phase: Pending
-  qosClass: BestEffort
has:name: b
Successful
message:diff -u -N /tmp/LIVE-4187292266/v1.Pod.nsbprune.b /tmp/MERGED-1028765149/v1.Pod.nsbprune.b
--- /tmp/LIVE-4187292266/v1.Pod.nsbprune.b	2023-03-18 12:52:39.045215817 +0000
+++ /tmp/MERGED-1028765149/v1.Pod.nsbprune.b	1970-01-01 00:00:00.000000000 +0000
@@ -1,62 +0,0 @@
-apiVersion: v1
-kind: Pod
-metadata:
-  annotations:
-    kubectl.kubernetes.io/last-applied-configuration: |
-      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"prune-group":"true"},"name":"b","namespace":"nsbprune"},"spec":{"containers":[{"image":"registry.k8s.io/pause:3.9","name":"kubernetes-pause"}]}}
-  creationTimestamp: "2023-03-18T12:52:37Z"
-  labels:
-    prune-group: "true"
-  managedFields:
-  - apiVersion: v1
-    fieldsType: FieldsV1
-    fieldsV1:
-      f:metadata:
-        f:annotations:
-          .: {}
-          f:kubectl.kubernetes.io/last-applied-configuration: {}
-        f:labels:
-          .: {}
-          f:prune-group: {}
-      f:spec:
-        f:containers:
-          k:{"name":"kubernetes-pause"}:
-            .: {}
-            f:image: {}
-            f:imagePullPolicy: {}
-            f:name: {}
-            f:resources: {}
-            f:terminationMessagePath: {}
-            f:terminationMessagePolicy: {}
-        f:dnsPolicy: {}
-        f:enableServiceLinks: {}
-        f:restartPolicy: {}
-        f:schedulerName: {}
-        f:securityContext: {}
-        f:terminationGracePeriodSeconds: {}
-    manager: kubectl-client-side-apply
-    operation: Update
-    time: "2023-03-18T12:52:37Z"
-  name: b
-  namespace: nsbprune
-  resourceVersion: "1096"
-  uid: 5379568d-04ae-4971-b0ac-34966637f75e
-spec:
-  containers:
-  - image: registry.k8s.io/pause:3.9
-    imagePullPolicy: IfNotPresent
-    name: kubernetes-pause
-    resources: {}
-    terminationMessagePath: /dev/termination-log
-    terminationMessagePolicy: File
-  dnsPolicy: ClusterFirst
-  enableServiceLinks: true
-  preemptionPolicy: PreemptLowerPriority
-  priority: 0
-  restartPolicy: Always
-  schedulerName: default-scheduler
-  securityContext: {}
-  terminationGracePeriodSeconds: 30
-status:
-  phase: Pending
-  qosClass: BestEffort
has not:name: c
namespace "nsbprune" deleted
I0318 12:52:41.916139 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="nsb"
+++ exit code: 0
Recording: run_kubectl_diff_same_names
Running command: run_kubectl_diff_same_names
+++ Running case: test-cmd.run_kubectl_diff_same_names
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_diff_same_names
+++ [0318 12:52:44] Creating namespace namespace-1679143964-22608
namespace/namespace-1679143964-22608 created
Context "test" modified.
+++ [0318 12:52:44] Test kubectl diff with multiple resources with the same name
Successful
message:/tmp/LIVE-1911774328
/tmp/LIVE-1911774328/v1.Secret.namespace-1679143964-22608.test
/tmp/LIVE-1911774328/apps.v1.Deployment.namespace-1679143964-22608.test
/tmp/LIVE-1911774328/v1.Pod.namespace-1679143964-22608.test
/tmp/LIVE-1911774328/v1.ConfigMap.namespace-1679143964-22608.test
/tmp/MERGED-2330284339
/tmp/MERGED-2330284339/v1.Secret.namespace-1679143964-22608.test
/tmp/MERGED-2330284339/apps.v1.Deployment.namespace-1679143964-22608.test
/tmp/MERGED-2330284339/v1.Pod.namespace-1679143964-22608.test
/tmp/MERGED-2330284339/v1.ConfigMap.namespace-1679143964-22608.test
has:v1\.Pod\..*\.test
Successful
message:/tmp/LIVE-1911774328
/tmp/LIVE-1911774328/v1.Secret.namespace-1679143964-22608.test
/tmp/LIVE-1911774328/apps.v1.Deployment.namespace-1679143964-22608.test
/tmp/LIVE-1911774328/v1.Pod.namespace-1679143964-22608.test
/tmp/LIVE-1911774328/v1.ConfigMap.namespace-1679143964-22608.test
/tmp/MERGED-2330284339
/tmp/MERGED-2330284339/v1.Secret.namespace-1679143964-22608.test
/tmp/MERGED-2330284339/apps.v1.Deployment.namespace-1679143964-22608.test
/tmp/MERGED-2330284339/v1.Pod.namespace-1679143964-22608.test
/tmp/MERGED-2330284339/v1.ConfigMap.namespace-1679143964-22608.test
has:apps\.v1\.Deployment\..*\.test
Successful
message:/tmp/LIVE-1911774328
/tmp/LIVE-1911774328/v1.Secret.namespace-1679143964-22608.test
/tmp/LIVE-1911774328/apps.v1.Deployment.namespace-1679143964-22608.test
/tmp/LIVE-1911774328/v1.Pod.namespace-1679143964-22608.test
/tmp/LIVE-1911774328/v1.ConfigMap.namespace-1679143964-22608.test
/tmp/MERGED-2330284339
/tmp/MERGED-2330284339/v1.Secret.namespace-1679143964-22608.test
/tmp/MERGED-2330284339/apps.v1.Deployment.namespace-1679143964-22608.test
/tmp/MERGED-2330284339/v1.Pod.namespace-1679143964-22608.test
/tmp/MERGED-2330284339/v1.ConfigMap.namespace-1679143964-22608.test
has:v1\.ConfigMap\..*\.test
Successful
message:/tmp/LIVE-1911774328
/tmp/LIVE-1911774328/v1.Secret.namespace-1679143964-22608.test
/tmp/LIVE-1911774328/apps.v1.Deployment.namespace-1679143964-22608.test
/tmp/LIVE-1911774328/v1.Pod.namespace-1679143964-22608.test
/tmp/LIVE-1911774328/v1.ConfigMap.namespace-1679143964-22608.test
/tmp/MERGED-2330284339
/tmp/MERGED-2330284339/v1.Secret.namespace-1679143964-22608.test
/tmp/MERGED-2330284339/apps.v1.Deployment.namespace-1679143964-22608.test
/tmp/MERGED-2330284339/v1.Pod.namespace-1679143964-22608.test
/tmp/MERGED-2330284339/v1.ConfigMap.namespace-1679143964-22608.test
has:v1\.Secret\..*\.test
+++ exit code: 0
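The four assertions above match on the temp-file naming scheme kubectl diff uses: each object is written into the LIVE and MERGED directories as a file named from its group/version, kind, namespace, and name, so several resources sharing one name stay distinct. The differ itself is pluggable via the KUBECTL_EXTERNAL_DIFF environment variable; a minimal sketch (the manifest name is assumed):

# any program that accepts two directories can serve as the differ
KUBECTL_EXTERNAL_DIFF="diff -u -N" kubectl diff -f resources.yaml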
Recording: run_kubectl_get_tests
Running command: run_kubectl_get_tests
+++ Running case: test-cmd.run_kubectl_get_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_get_tests
+++ [0318 12:52:44] Creating namespace namespace-1679143964-25511
namespace/namespace-1679143964-25511 created
Context "test" modified.
+++ [0318 12:52:44] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
Successful
message:{
    "apiVersion": "v1",
    "items": [],
    "kind": "List",
    "metadata": {
        "resourceVersion": ""
    }
}
has not:No resources found
Successful
message:apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
has not:No resources found
Successful
message:
has not:No resources found
Successful
message:[]
has not:No resources found
Successful
message:[]
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1679143964-25511 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1679143964-25511 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0318 12:52:46.178110 35954 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:52:46.183188 35954 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0318 12:52:46.197625 35954 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0318 12:52:46.199361 35954 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
I0318 12:52:46.201183 35954 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds
I0318 12:52:46.203074 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds
I0318 12:52:46.205076 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds
I0318 12:52:46.206366 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds
I0318 12:52:46.207799 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds
I0318 12:52:46.209016 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds
I0318 12:52:46.210217 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 0 milliseconds
I0318 12:52:46.211430 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   3m17s
has:/api/v1/namespaces/default/pods 200 OK
Successful
message: [identical -v=6 request trace and service listing as above]
has:/api/v1/namespaces/default/replicationcontrollers 200 OK
Successful
message: [identical -v=6 request trace and service listing as above]
has:/api/v1/namespaces/default/services 200 OK
Successful
message: [identical -v=6 request trace and service listing as above]
has:/apis/apps/v1/namespaces/default/daemonsets 200 OK
Successful
message: [identical -v=6 request trace and service listing as above]
has:/apis/apps/v1/namespaces/default/deployments 200 OK
Successful
message: [identical -v=6 request trace and service listing as above]
has:/apis/apps/v1/namespaces/default/replicasets 200 OK
Successful
message: [identical -v=6 request trace and service listing as above]
has:/apis/apps/v1/namespaces/default/statefulsets 200 OK
Successful
message: [identical -v=6 request trace and service listing as above]
has:/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200
Successful
message:I0318 12:52:46.178110 35954 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:52:46.183188 35954 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0318 12:52:46.197625 35954 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0318 12:52:46.199361 35954 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
I0318 12:52:46.201183 35954 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds
I0318 12:52:46.203074 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds
I0318 12:52:46.205076 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 
1 milliseconds I0318 12:52:46.206366 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0318 12:52:46.207799 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds I0318 12:52:46.209016 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds I0318 12:52:46.210217 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 0 milliseconds I0318 12:52:46.211430 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.0.0.1 443/TCP 3m17s has:/apis/batch/v1/namespaces/default/jobs 200 OK Successful (Bmessage:I0318 12:52:46.178110 35954 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:52:46.183188 35954 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:52:46.197625 35954 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I0318 12:52:46.199361 35954 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds I0318 12:52:46.201183 35954 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds I0318 12:52:46.203074 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds I0318 12:52:46.205076 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds I0318 12:52:46.206366 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0318 12:52:46.207799 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds I0318 12:52:46.209016 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds I0318 12:52:46.210217 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 0 milliseconds I0318 12:52:46.211430 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.0.0.1 443/TCP 3m17s has not:/apis/extensions/v1beta1/namespaces/default/daemonsets 200 OK Successful (Bmessage:I0318 12:52:46.178110 35954 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:52:46.183188 35954 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:52:46.197625 35954 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I0318 12:52:46.199361 35954 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds I0318 12:52:46.201183 35954 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds I0318 12:52:46.203074 35954 round_trippers.go:553] GET 
https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds I0318 12:52:46.205076 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds I0318 12:52:46.206366 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0318 12:52:46.207799 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds I0318 12:52:46.209016 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds I0318 12:52:46.210217 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 0 milliseconds I0318 12:52:46.211430 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.0.0.1 443/TCP 3m17s has not:/apis/extensions/v1beta1/namespaces/default/deployments 200 OK Successful (Bmessage:I0318 12:52:46.178110 35954 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:52:46.183188 35954 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:52:46.197625 35954 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I0318 12:52:46.199361 35954 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds I0318 12:52:46.201183 35954 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds I0318 12:52:46.203074 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds I0318 12:52:46.205076 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds I0318 12:52:46.206366 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0318 12:52:46.207799 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds I0318 12:52:46.209016 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds I0318 12:52:46.210217 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 0 milliseconds I0318 12:52:46.211430 35954 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.0.0.1 443/TCP 3m17s has not:/apis/extensions/v1beta1/namespaces/default/replicasets 200 OK Successful (Bmessage:I0318 12:52:46.279103 35977 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:52:46.284334 35977 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:52:46.290533 35977 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?limit=10 200 OK in 2 milliseconds I0318 12:52:46.293538 35977 round_trippers.go:553] GET 
https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTEyMSwic3RhcnQiOiJzeXN0ZW06YWdncmVnYXRlLXRvLXZpZXdcdTAwMDAifQ&limit=10 200 OK in 1 milliseconds I0318 12:52:46.295890 35977 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTEyMSwic3RhcnQiOiJzeXN0ZW06Y29udHJvbGxlcjpjZXJ0aWZpY2F0ZS1jb250cm9sbGVyXHUwMDAwIn0&limit=10 200 OK in 1 milliseconds I0318 12:52:46.298260 35977 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTEyMSwic3RhcnQiOiJzeXN0ZW06Y29udHJvbGxlcjpleHBhbmQtY29udHJvbGxlclx1MDAwMCJ9&limit=10 200 OK in 1 milliseconds I0318 12:52:46.300677 35977 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTEyMSwic3RhcnQiOiJzeXN0ZW06Y29udHJvbGxlcjpyZXBsaWNhc2V0LWNvbnRyb2xsZXJcdTAwMDAifQ&limit=10 200 OK in 1 milliseconds I0318 12:52:46.303105 35977 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTEyMSwic3RhcnQiOiJzeXN0ZW06ZGlzY292ZXJ5XHUwMDAwIn0&limit=10 200 OK in 1 milliseconds I0318 12:52:46.305264 35977 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTEyMSwic3RhcnQiOiJzeXN0ZW06bm9kZS1wcm9ibGVtLWRldGVjdG9yXHUwMDAwIn0&limit=10 200 OK in 1 milliseconds NAME CREATED AT admin 2023-03-18T12:49:27Z aggregation-reader 2023-03-18T12:50:18Z cluster-admin 2023-03-18T12:49:27Z edit 2023-03-18T12:49:27Z pod-admin 2023-03-18T12:50:17Z resource-reader 2023-03-18T12:50:18Z resourcename-reader 2023-03-18T12:50:18Z system:aggregate-to-admin 2023-03-18T12:49:27Z system:aggregate-to-edit 2023-03-18T12:49:27Z system:aggregate-to-view 2023-03-18T12:49:27Z system:auth-delegator 2023-03-18T12:49:28Z system:basic-user 2023-03-18T12:49:27Z system:certificates.k8s.io:certificatesigningrequests:nodeclient 2023-03-18T12:49:28Z system:certificates.k8s.io:certificatesigningrequests:selfnodeclient 2023-03-18T12:49:28Z system:certificates.k8s.io:kube-apiserver-client-approver 2023-03-18T12:49:28Z system:certificates.k8s.io:kube-apiserver-client-kubelet-approver 2023-03-18T12:49:28Z system:certificates.k8s.io:kubelet-serving-approver 2023-03-18T12:49:28Z system:certificates.k8s.io:legacy-unknown-approver 2023-03-18T12:49:28Z system:controller:attachdetach-controller 2023-03-18T12:49:28Z system:controller:certificate-controller 2023-03-18T12:49:28Z system:controller:clusterrole-aggregation-controller 2023-03-18T12:49:28Z system:controller:cronjob-controller 2023-03-18T12:49:28Z system:controller:daemon-set-controller 2023-03-18T12:49:28Z system:controller:deployment-controller 2023-03-18T12:49:28Z system:controller:disruption-controller 2023-03-18T12:49:28Z system:controller:endpoint-controller 2023-03-18T12:49:28Z system:controller:endpointslice-controller 2023-03-18T12:49:28Z system:controller:endpointslicemirroring-controller 2023-03-18T12:49:28Z system:controller:ephemeral-volume-controller 2023-03-18T12:49:28Z system:controller:expand-controller 2023-03-18T12:49:28Z system:controller:generic-garbage-collector 2023-03-18T12:49:28Z system:controller:horizontal-pod-autoscaler 2023-03-18T12:49:28Z system:controller:job-controller 2023-03-18T12:49:28Z system:controller:namespace-controller 
2023-03-18T12:49:28Z
system:controller:node-controller                                  2023-03-18T12:49:28Z
system:controller:persistent-volume-binder                         2023-03-18T12:49:28Z
system:controller:pod-garbage-collector                            2023-03-18T12:49:28Z
system:controller:pv-protection-controller                         2023-03-18T12:49:28Z
system:controller:pvc-protection-controller                        2023-03-18T12:49:28Z
system:controller:replicaset-controller                            2023-03-18T12:49:28Z
system:controller:replication-controller                           2023-03-18T12:49:28Z
system:controller:resourcequota-controller                         2023-03-18T12:49:28Z
system:controller:root-ca-cert-publisher                           2023-03-18T12:49:28Z
system:controller:route-controller                                 2023-03-18T12:49:28Z
system:controller:service-account-controller                       2023-03-18T12:49:28Z
system:controller:service-controller                               2023-03-18T12:49:28Z
system:controller:statefulset-controller                           2023-03-18T12:49:28Z
system:controller:ttl-after-finished-controller                    2023-03-18T12:49:28Z
system:controller:ttl-controller                                   2023-03-18T12:49:28Z
system:discovery                                                   2023-03-18T12:49:27Z
system:heapster                                                    2023-03-18T12:49:27Z
system:kube-aggregator                                             2023-03-18T12:49:28Z
system:kube-controller-manager                                     2023-03-18T12:49:28Z
system:kube-dns                                                    2023-03-18T12:49:28Z
system:kube-scheduler                                              2023-03-18T12:49:28Z
system:kubelet-api-admin                                           2023-03-18T12:49:27Z
system:monitoring                                                  2023-03-18T12:49:27Z
system:node                                                        2023-03-18T12:49:27Z
system:node-bootstrapper                                           2023-03-18T12:49:28Z
system:node-problem-detector                                       2023-03-18T12:49:27Z
system:node-proxier                                                2023-03-18T12:49:28Z
system:persistent-volume-provisioner                               2023-03-18T12:49:28Z
system:public-info-viewer                                          2023-03-18T12:49:27Z
system:service-account-issuer-discovery                            2023-03-18T12:49:28Z
system:volume-scheduler                                            2023-03-18T12:49:28Z
url-reader                                                         2023-03-18T12:50:18Z
view                                                               2023-03-18T12:49:27Z
has:/clusterroles?limit=10 200 OK
Successful
has:/v1/clusterroles?continue=
Successful
message:I0318 12:52:46.360830 35991 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:52:46.365505 35991 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0318 12:52:46.375552 35991 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?limit=500 200 OK in 5 milliseconds
has:/clusterroles?limit=500 200 OK
I0318 12:52:49.344172 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="nsbprune"
Successful
message:default                      Active   3m20s
kube-node-lease              Active   3m20s
kube-public                  Active   3m20s
kube-system                  Active   3m20s
namespace-1679143811-20240   Active   2m35s
namespace-1679143811-30125   Active   2m35s
namespace-1679143812-6002    Active   2m34s
namespace-1679143814-29476   Active   2m32s
namespace-1679143817-32530   Active   2m29s
namespace-1679143823-31568   Active   2m23s
namespace-1679143826-1679    Active   2m20s
namespace-1679143829-13516   Active   2m17s
namespace-1679143834-32323   Active   2m13s
namespace-1679143837-15778   Active   2m10s
namespace-1679143838-25271   Active   2m9s
namespace-1679143839-21860   Active   2m8s
namespace-1679143849-30637   Active   118s
namespace-1679143850-3110    Active   118s
namespace-1679143862-28451   Active   106s
namespace-1679143863-8287    Active   105s
namespace-1679143865-5175    Active   103s
namespace-1679143866-13326   Active   102s
namespace-1679143866-14796   Active   103s
namespace-1679143869-10982   Active   100s
namespace-1679143869-2145    Active   100s
namespace-1679143919-30781   Active   50s
namespace-1679143927-2947    Active   42s
namespace-1679143928-31748   Active   42s
namespace-1679143929-14049   Active   41s
namespace-1679143944-21585   Active   26s
namespace-1679143964-22608   Active   6s
namespace-1679143964-25511   Active   6s
has:default
Successful
has:kube-public
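The paginated requests above come from kubectl's client-side list chunking: each page is fetched with ?limit=N plus the opaque continue token returned by the previous response. A minimal way to reproduce this against any cluster (the exact flags used by the test script are not shown in the log, so this invocation is an assumption):

  # Fetch cluster roles ten at a time; -v=6 logs each GET, including the
  # continue= token that links one page to the next.
  kubectl get clusterroles --chunk-size=10 -v=6

  # Compare with a single large page (kubectl's default chunk size is 500).
  kubectl get clusterroles --chunk-size=500 -v=6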
Successful
has:kube-system
get.sh:137: Successful get configmaps {{range.items}}{{ if eq .metadata.name "one" }}found{{end}}{{end}}:: :
get.sh:138: Successful get configmaps {{range.items}}{{ if eq .metadata.name "two" }}found{{end}}{{end}}:: :
get.sh:139: Successful get configmaps {{range.items}}{{ if eq .metadata.name "three" }}found{{end}}{{end}}:: :
configmap/one created
configmap/two created
configmap/three created
Successful
message:NAME               DATA   AGE
kube-root-ca.crt   1      7s
one                0      0s
three              0      0s
two                0      0s
has not:watch is only supported on individual resources
Successful
message:
has not:watch is only supported on individual resources
+++ [0318 12:52:53] Creating namespace namespace-1679143973-26352
namespace/namespace-1679143973-26352 created
Context "test" modified.
get.sh:153: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {
                "creationTimestamp": "2023-03-18T12:52:53Z",
                "labels": {
                    "name": "valid-pod"
                },
                "name": "valid-pod",
                "namespace": "namespace-1679143973-26352",
                "resourceVersion": "1133",
                "uid": "25f898a9-6309-4c4d-8068-f2787a3f4615"
            },
            "spec": {
                "containers": [
                    {
                        "image": "registry.k8s.io/serve_hostname",
                        "imagePullPolicy": "Always",
                        "name": "kubernetes-serve-hostname",
                        "resources": {
                            "limits": {
                                "cpu": "1",
                                "memory": "512Mi"
                            },
                            "requests": {
                                "cpu": "1",
                                "memory": "512Mi"
                            }
                        },
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File"
                    }
                ],
                "dnsPolicy": "ClusterFirst",
                "enableServiceLinks": true,
                "preemptionPolicy": "PreemptLowerPriority",
                "priority": 0,
                "restartPolicy": "Always",
                "schedulerName": "default-scheduler",
                "securityContext": {},
                "terminationGracePeriodSeconds": 30
            },
            "status": {
                "phase": "Pending",
                "qosClass": "Guaranteed"
            }
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": ""
    }
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:valid-pod:
has:valid-pod:
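The get.sh checks above drive kubectl with Go templates to assert on resource names. The same template syntax can be used directly; the resource names below are taken from the log:

  # Print every pod name in the current namespace, colon-separated.
  kubectl get pods -o go-template='{{range .items}}{{.metadata.name}}:{{end}}'

  # Emit "found" only if a configmap named "one" exists.
  kubectl get configmaps -o go-template='{{range .items}}{{if eq .metadata.name "one"}}found{{end}}{{end}}'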
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found.
Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was: map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{...}, "spec":map[string]interface {}{...}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing".
Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was: {"apiVersion":"v1","kind":"Pod","metadata":{...},"spec":{...},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was: map[apiVersion:v1 kind:Pod metadata:map[...] spec:map[...] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:Error from server (NotFound): the server could not find the requested resource
has:the server could not find the requested resource
Successful
has:STATUS
Successful
has:valid-pod
Successful
message:pod/valid-pod
has not:STATUS
Successful
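Both failures above are deliberate: the test asks for a key that does not exist and checks the error text, which differs between the two template engines. A plausible reproduction, reusing the pod name from the log and running in the test's current namespace:

  # jsonpath reports "missing is not found" for an absent key.
  kubectl get pod valid-pod -o jsonpath='{.missing}'

  # go-template instead reports: map has no entry for key "missing"
  kubectl get pod valid-pod -o go-template='{{.missing}}'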
message:pod/valid-pod
has:pod/valid-pod
Successful
message:apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2023-03-18T12:52:53Z"
  labels:
    name: valid-pod
  name: valid-pod
  namespace: namespace-1679143973-26352
  resourceVersion: "1133"
  uid: 25f898a9-6309-4c4d-8068-f2787a3f4615
spec:
  containers:
  - image: registry.k8s.io/serve_hostname
    imagePullPolicy: Always
    name: kubernetes-serve-hostname
    resources:
      limits:
        cpu: "1"
        memory: 512Mi
      requests:
        cpu: "1"
        memory: 512Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has not:STATUS
Successful
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:204: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/redis-master created
pod/valid-pod created
Successful
message:redis-master valid-pod
has:redis-master valid-pod
pod "redis-master" deleted
pod "valid-pod" deleted
get.sh:218: Successful get configmaps --field-selector=metadata.name=test-the-map {{range.items}}{{.metadata.name}}:{{end}}:
get.sh:219: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}:
get.sh:220: Successful get services {{range.items}}{{.metadata.name}}:{{end}}:
configmap/test-the-map created
I0318 12:52:58.785195 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679143973-26352/test-the-service" clusterIPs=map[IPv4:10.0.0.193]
service/test-the-service created
deployment.apps/test-the-deployment created
I0318 12:52:58.882627 23056 event.go:307] "Event occurred" object="namespace-1679143973-26352/test-the-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-the-deployment-6ccf78d7dd to 3"
I0318 12:52:58.901195 23056 event.go:307] "Event occurred" object="namespace-1679143973-26352/test-the-deployment-6ccf78d7dd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6ccf78d7dd-lcn7l"
I0318 12:52:58.921116 23056 event.go:307] "Event occurred" object="namespace-1679143973-26352/test-the-deployment-6ccf78d7dd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6ccf78d7dd-zc4r6"
I0318 12:52:58.921949 23056 event.go:307] "Event occurred" object="namespace-1679143973-26352/test-the-deployment-6ccf78d7dd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6ccf78d7dd-tcz5t"
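The configmap, service, and deployment above are created from a single multi-document manifest, and the test then filters for them by exact name. Field selectors work the same way outside the test harness; the manifest file name below is illustrative, not the one used by get.sh:

  # Create a configmap, service, and deployment from one manifest.
  kubectl apply -f test-the-resources.yaml

  # List only the configmap whose name matches exactly.
  kubectl get configmaps --field-selector=metadata.name=test-the-map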
Successful
message:test-the-map test-the-service test-the-deployment
has:test-the-map
Successful
has:test-the-deployment
Successful
has:test-the-service
configmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
get.sh:235: Successful get configmaps --field-selector=metadata.name=test-the-map {{range.items}}{{.metadata.name}}:{{end}}:
get.sh:236: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}:
get.sh:237: Successful get services {{range.items}}{{.metadata.name}}:{{end}}:
+++ exit code: 0
Recording: run_kubectl_help_tests
Running command: run_kubectl_help_tests
+++ Running case: test-cmd.run_kubectl_help_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_help_tests
Successful
message:kubectl controls the Kubernetes cluster manager. Find more information at: https://kubernetes.io/docs/reference/kubectl/ Basic Commands (Beginner): create Create a resource from a file or from stdin expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service run Run a particular image on the cluster set Set specific features on objects Basic Commands (Intermediate): explain Get documentation for a resource get Display one or many resources edit Edit a resource on the server delete Delete resources by file names, stdin, resources and names, or by resources and label selector Deploy Commands: rollout Manage the rollout of a resource scale Set a new size for a deployment, replica set, or replication controller autoscale Auto-scale a deployment, replica set, stateful set, or replication controller Cluster Management Commands: certificate Modify certificate resources.
cluster-info Display cluster information top Display resource (CPU/memory) usage cordon Mark node as unschedulable uncordon Mark node as schedulable drain Drain node in preparation for maintenance taint Update the taints on one or more nodes Troubleshooting and Debugging Commands: describe Show details of a specific resource or group of resources logs Print the logs for a container in a pod attach Attach to a running container exec Execute a command in a container port-forward Forward one or more local ports to a pod proxy Run a proxy to the Kubernetes API server cp Copy files and directories to and from containers auth Inspect authorization debug Create debugging sessions for troubleshooting workloads and nodes events List events Advanced Commands: diff Diff the live version against a would-be applied version apply Apply a configuration to a resource by file name or stdin patch Update fields of a resource replace Replace a resource by file name or stdin wait Experimental: Wait for a specific condition on one or many resources kustomize Build a kustomization target from a directory or URL Settings Commands: label Update the labels on a resource annotate Update the annotations on a resource completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Other Commands: api-resources Print the supported API resources on the server api-versions Print the supported API versions on the server, in the form of "group/version" config Modify kubeconfig files plugin Provides utilities for interacting with plugins version Print the client and server version information Usage: kubectl [flags] [options] Use "kubectl --help" for more information about a given command. Use "kubectl options" for a list of global command-line options (applies to all commands). has:Modify kubeconfig files Successful (Bmessage:kubectl controls the Kubernetes cluster manager. 
Find more information at: https://kubernetes.io/docs/reference/kubectl/ Basic Commands (Beginner): create Create a resource from a file or from stdin expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service run Starte ein bestimmtes Image auf dem Cluster set Setze bestimmte Features auf Objekten Basic Commands (Intermediate): explain Get documentation for a resource get Zeige eine oder mehrere Resourcen edit Bearbeite eine Resource auf dem Server delete Delete resources by file names, stdin, resources and names, or by resources and label selector Deploy Commands: rollout Manage the rollout of a resource scale Set a new size for a deployment, replica set, or replication controller autoscale Auto-scale a deployment, replica set, stateful set, or replication controller Cluster Management Commands: certificate Verändere Certificate-Resources cluster-info Display cluster information top Display resource (CPU/memory) usage cordon Markiere Knoten als unschedulable uncordon Markiere Knoten als schedulable drain Leere Knoten, um eine Wartung vorzubereiten taint Aktualisiere die Taints auf einem oder mehreren Knoten Troubleshooting and Debugging Commands: describe Zeige Details zu einer bestimmten Resource oder Gruppe von Resourcen logs Schreibt die Logs für einen Container in einem Pod attach Weise einem laufenden Container zu exec Führe einen Befehl im Container aus port-forward Leite einen oder mehrere lokale Ports an einen Pod weiter proxy Starte einen Proxy zum Kubernetes-API-Server cp Copy files and directories to and from containers auth Inspect authorization debug Create debugging sessions for troubleshooting workloads and nodes events List events Advanced Commands: diff Diff the live version against a would-be applied version apply Apply a configuration to a resource by file name or stdin patch Update fields of a resource replace Replace a resource by file name or stdin wait Experimental: Wait for a specific condition on one or many resources kustomize Build a kustomization target from a directory or URL Settings Commands: label Aktualisiere die Labels auf einer Resource annotate Aktualisiere die Annotationen auf einer Resource completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Other Commands: api-resources Print the supported API resources on the server api-versions Print the supported API versions on the server, in the form of "group/version" config Verändere kubeconfig Dateien plugin Provides utilities for interacting with plugins version Schreibt die Client- und Server-Versionsinformation Usage: kubectl [flags] [options] Use "kubectl --help" for more information about a given command. Use "kubectl options" for a list of global command-line options (applies to all commands). has:Verändere kubeconfig Dateien Successful (Bmessage:kubectl controls the Kubernetes cluster manager. 
has:Modify kubeconfig files
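The help-text cases in this block exercise kubectl's translations: the same --help output is requested under different locales and matched against one localized string per language. A sketch of the pattern (the exact locale names used by the test script are an assumption):

  # German help text; the test greps for "Verändere kubeconfig Dateien".
  LANG=de_DE.UTF-8 kubectl --help

  # French, Italian, Japanese, Korean, and Portuguese run the same way, e.g.
  LANG=fr_FR.UTF-8 kubectl --help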
Successful
message:kubectl controls the Kubernetes cluster manager.
Find more information at: https://kubernetes.io/docs/reference/kubectl/ Basic Commands (Beginner): create Create a resource from a file or from stdin expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service run Run a particular image on the cluster set Set specific features on objects Basic Commands (Intermediate): explain Get documentation for a resource get Display one or many resources edit Edit a resource on the server delete Delete resources by file names, stdin, resources and names, or by resources and label selector Deploy Commands: rollout Manage the rollout of a resource scale Set a new size for a deployment, replica set, or replication controller autoscale Auto-scale a deployment, replica set, stateful set, or replication controller Cluster Management Commands: certificate Modify certificate resources. cluster-info Display cluster information top Display resource (CPU/memory) usage cordon Mark node as unschedulable uncordon Mark node as schedulable drain Drain node in preparation for maintenance taint Update the taints on one or more nodes Troubleshooting and Debugging Commands: describe Show details of a specific resource or group of resources logs Print the logs for a container in a pod attach Attach to a running container exec Execute a command in a container port-forward Forward one or more local ports to a pod proxy Run a proxy to the Kubernetes API server cp Copy files and directories to and from containers auth Inspect authorization debug Create debugging sessions for troubleshooting workloads and nodes events List events Advanced Commands: diff Diff the live version against a would-be applied version apply Apply a configuration to a resource by file name or stdin patch Update fields of a resource replace Replace a resource by file name or stdin wait Experimental: Wait for a specific condition on one or many resources kustomize Build a kustomization target from a directory or URL Settings Commands: label Update the labels on a resource annotate Mettre à jour les annotations d'une ressource completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Other Commands: api-resources Print the supported API resources on the server api-versions Print the supported API versions on the server, in the form of "group/version" config Modifier des fichiers kubeconfig plugin Provides utilities for interacting with plugins version Print the client and server version information Usage: kubectl [flags] [options] Use "kubectl --help" for more information about a given command. Use "kubectl options" for a list of global command-line options (applies to all commands).
has:Modifier des fichiers kubeconfig
Successful
message:kubectl controls the Kubernetes cluster manager.
Find more information at: https://kubernetes.io/docs/reference/kubectl/ Basic Commands (Beginner): create Create a resource from a file or from stdin expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service run Esegui una particolare immagine nel cluster set Imposta caratteristiche specifiche sugli oggetti Basic Commands (Intermediate): explain Get documentation for a resource get Visualizza una o più risorse edit Modificare una risorsa sul server delete Delete resources by file names, stdin, resources and names, or by resources and label selector Deploy Commands: rollout Manage the rollout of a resource scale Set a new size for a deployment, replica set, or replication controller autoscale Auto-scale a deployment, replica set, stateful set, or replication controller Cluster Management Commands: certificate Modificare le risorse del certificato. cluster-info Display cluster information top Display resource (CPU/memory) usage cordon Contrassegnare il nodo come non programmabile uncordon Contrassegnare il nodo come programmabile drain Drain node in preparazione alla manutenzione taint Aggiorna i taints su uno o più nodi Troubleshooting and Debugging Commands: describe Mostra i dettagli di una specifica risorsa o un gruppo di risorse logs Stampa i log per container in un pod attach Collega a un container in esecuzione exec Esegui un comando in un contenitore port-forward Inoltra una o più porte locali a un pod proxy Eseguire un proxy al server Kubernetes API cp Copy files and directories to and from containers auth Inspect authorization debug Create debugging sessions for troubleshooting workloads and nodes events List events Advanced Commands: diff Diff the live version against a would-be applied version apply Apply a configuration to a resource by file name or stdin patch Update fields of a resource replace Replace a resource by file name or stdin wait Experimental: Wait for a specific condition on one or many resources kustomize Build a kustomization target from a directory or URL Settings Commands: label Aggiorna label di una risorsa annotate Aggiorna annotazioni di risorsa completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Other Commands: api-resources Print the supported API resources on the server api-versions Print the supported API versions on the server, in the form of "group/version" config Modifica i file kubeconfig plugin Provides utilities for interacting with plugins version Stampa per client e server le informazioni sulla versione Usage: kubectl [flags] [options] Use "kubectl --help" for more information about a given command. Use "kubectl options" for a list of global command-line options (applies to all commands). has:Modifica i file kubeconfig Successful (Bmessage:kubectl controls the Kubernetes cluster manager. 
Find more information at: https://kubernetes.io/docs/reference/kubectl/ Basic Commands (Beginner): create Create a resource from a file or from stdin expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service run Run a particular image on the cluster set Set specific features on objects Basic Commands (Intermediate): explain Get documentation for a resource get 1つまたは複数のリソースを表示する edit Edit a resource on the server delete Delete resources by file names, stdin, resources and names, or by resources and label selector Deploy Commands: rollout Manage the rollout of a resource scale Set a new size for a deployment, replica set, or replication controller autoscale Auto-scale a deployment, replica set, stateful set, or replication controller Cluster Management Commands: certificate Modify certificate resources. cluster-info Display cluster information top Display resource (CPU/memory) usage cordon Mark node as unschedulable uncordon Mark node as schedulable drain Drain node in preparation for maintenance taint Update the taints on one or more nodes Troubleshooting and Debugging Commands: describe Show details of a specific resource or group of resources logs Print the logs for a container in a pod attach Attach to a running container exec Execute a command in a container port-forward Forward one or more local ports to a pod proxy Run a proxy to the Kubernetes API server cp Copy files and directories to and from containers auth Inspect authorization debug Create debugging sessions for troubleshooting workloads and nodes events List events Advanced Commands: diff Diff the live version against a would-be applied version apply Apply a configuration to a resource by file name or stdin patch Update fields of a resource replace Replace a resource by file name or stdin wait Experimental: Wait for a specific condition on one or many resources kustomize Build a kustomization target from a directory or URL Settings Commands: label リソースのラベルを更新する annotate リソースのアノテーションを更新する completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Other Commands: api-resources Print the supported API resources on the server api-versions Print the supported API versions on the server, in the form of "group/version" config kubeconfigを変更する plugin Provides utilities for interacting with plugins version Print the client and server version information Usage: kubectl [flags] [options] Use "kubectl --help" for more information about a given command. Use "kubectl options" for a list of global command-line options (applies to all commands). has:kubeconfigを変更する Successful (Bmessage:kubectl controls the Kubernetes cluster manager. 
Find more information at: https://kubernetes.io/docs/reference/kubectl/ Basic Commands (Beginner): create Create a resource from a file or from stdin expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service run Run a particular image on the cluster set Set specific features on objects Basic Commands (Intermediate): explain Get documentation for a resource get Display one or many resources edit Edit a resource on the server delete Delete resources by file names, stdin, resources and names, or by resources and label selector Deploy Commands: rollout Manage the rollout of a resource scale Set a new size for a deployment, replica set, or replication controller autoscale Auto-scale a deployment, replica set, stateful set, or replication controller Cluster Management Commands: certificate Modify certificate resources. cluster-info Display cluster information top Display resource (CPU/memory) usage cordon Mark node as unschedulable uncordon Mark node as schedulable drain Drain node in preparation for maintenance taint Update the taints on one or more nodes Troubleshooting and Debugging Commands: describe Show details of a specific resource or group of resources logs Print the logs for a container in a pod attach Attach to a running container exec Execute a command in a container port-forward Forward one or more local ports to a pod proxy Run a proxy to the Kubernetes API server cp Copy files and directories to and from containers auth Inspect authorization debug Create debugging sessions for troubleshooting workloads and nodes events List events Advanced Commands: diff Diff the live version against a would-be applied version apply Apply a configuration to a resource by file name or stdin patch Update fields of a resource replace Replace a resource by file name or stdin wait Experimental: Wait for a specific condition on one or many resources kustomize Build a kustomization target from a directory or URL Settings Commands: label Update the labels on a resource annotate 자원에 대한 주석을 업데이트합니다 completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Other Commands: api-resources Print the supported API resources on the server api-versions Print the supported API versions on the server, in the form of "group/version" config kubeconfig 파일을 수정합니다 plugin Provides utilities for interacting with plugins version Print the client and server version information Usage: kubectl [flags] [options] Use "kubectl --help" for more information about a given command. Use "kubectl options" for a list of global command-line options (applies to all commands). has:kubeconfig 파일을 수정합니다 Successful (Bmessage:kubectl controls the Kubernetes cluster manager. 
Find more information at: https://kubernetes.io/docs/reference/kubectl/ Basic Commands (Beginner): create Create a resource from a file or from stdin expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service run Executa uma imagem específica no cluster set Define funcionalidades específicas em objetos Basic Commands (Intermediate): explain Get documentation for a resource get Mostra um ou mais recursos edit Edita um recurso no servidor delete Delete resources by file names, stdin, resources and names, or by resources and label selector Deploy Commands: rollout Manage the rollout of a resource scale Set a new size for a deployment, replica set, or replication controller autoscale Auto-scale a deployment, replica set, stateful set, or replication controller Cluster Management Commands: certificate Edita o certificado dos recursos. cluster-info Display cluster information top Display resource (CPU/memory) usage cordon Marca o node como não agendável uncordon Marca o node como agendável drain Drenar o node para preparação de manutenção taint Atualizar o taints de um ou mais nodes Troubleshooting and Debugging Commands: describe Mostra os detalhes de um recurso específico ou de um grupo de recursos logs Mostra os logs de um container em um pod attach Se conecta a um container em execução exec Executa um comando em um container port-forward Encaminhar uma ou mais portas locais para um pod proxy Executa um proxy para o servidor de API do Kubernetes cp Copy files and directories to and from containers auth Inspect authorization debug Create debugging sessions for troubleshooting workloads and nodes events List events Advanced Commands: diff Diff the live version against a would-be applied version apply Apply a configuration to a resource by file name or stdin patch Update fields of a resource replace Replace a resource by file name or stdin wait Experimental: Wait for a specific condition on one or many resources kustomize Build a kustomization target from a directory or URL Settings Commands: label Atualizar os labels de um recurso annotate Atualizar as anotações de um recurso completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Other Commands: api-resources Print the supported API resources on the server api-versions Print the supported API versions on the server, in the form of "group/version" config Edita o arquivo kubeconfig plugin Provides utilities for interacting with plugins version Mostra a informação de versão do cliente e do servidor Usage: kubectl [flags] [options] Use "kubectl --help" for more information about a given command. Use "kubectl options" for a list of global command-line options (applies to all commands). has:Edita o arquivo kubeconfig Successful (Bmessage:kubectl controls the Kubernetes cluster manager. 
Find more information at: https://kubernetes.io/docs/reference/kubectl/ Basic Commands (Beginner): create Create a resource from a file or from stdin expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service run 在集群上运行特定镜像 set 为对象设置指定特性 Basic Commands (Intermediate): explain Get documentation for a resource get 显示一个或多个资源 edit 编辑服务器上的资源 delete Delete resources by file names, stdin, resources and names, or by resources and label selector Deploy Commands: rollout Manage the rollout of a resource scale Set a new size for a deployment, replica set, or replication controller autoscale Auto-scale a deployment, replica set, stateful set, or replication controller Cluster Management Commands: certificate 修改证书资源。 cluster-info Display cluster information top Display resource (CPU/memory) usage cordon 标记节点为不可调度 uncordon 标记节点为可调度 drain 清空节点以准备维护 taint 更新一个或者多个节点上的污点 Troubleshooting and Debugging Commands: describe 显示特定资源或资源组的详细信息 logs 打印 Pod 中容器的日志 attach 挂接到一个运行中的容器 exec 在某个容器中执行一个命令 port-forward 将一个或多个本地端口转发到某个 Pod proxy 运行一个指向 Kubernetes API 服务器的代理 cp Copy files and directories to and from containers auth Inspect authorization debug Create debugging sessions for troubleshooting workloads and nodes events List events Advanced Commands: diff Diff the live version against a would-be applied version apply Apply a configuration to a resource by file name or stdin patch Update fields of a resource replace Replace a resource by file name or stdin wait Experimental: Wait for a specific condition on one or many resources kustomize Build a kustomization target from a directory or URL Settings Commands: label 更新某资源上的标签 annotate 更新一个资源的注解 completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Other Commands: api-resources Print the supported API resources on the server api-versions Print the supported API versions on the server, in the form of "group/version" config 修改 kubeconfig 文件 plugin Provides utilities for interacting with plugins version 输出客户端和服务端的版本信息 Usage: kubectl [flags] [options] Use "kubectl --help" for more information about a given command. Use "kubectl options" for a list of global command-line options (applies to all commands). has:修改 kubeconfig 文件 Successful (Bmessage:kubectl controls the Kubernetes cluster manager. Find more information at: https://kubernetes.io/docs/reference/kubectl/ Basic Commands (Beginner): create Create a resource from a file or from stdin expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service run Run a particular image on the cluster set Set specific features on objects Basic Commands (Intermediate): explain Get documentation for a resource get Display one or many resources edit Edit a resource on the server delete Delete resources by file names, stdin, resources and names, or by resources and label selector Deploy Commands: rollout Manage the rollout of a resource scale Set a new size for a deployment, replica set, or replication controller autoscale Auto-scale a deployment, replica set, stateful set, or replication controller Cluster Management Commands: certificate Modify certificate resources. 
cluster-info Display cluster information top Display resource (CPU/memory) usage cordon Mark node as unschedulable uncordon Mark node as schedulable drain Drain node in preparation for maintenance taint Update the taints on one or more nodes Troubleshooting and Debugging Commands: describe Show details of a specific resource or group of resources logs Print the logs for a container in a pod attach Attach to a running container exec Execute a command in a container port-forward Forward one or more local ports to a pod proxy Run a proxy to the Kubernetes API server cp Copy files and directories to and from containers auth Inspect authorization debug Create debugging sessions for troubleshooting workloads and nodes events List events Advanced Commands: diff Diff the live version against a would-be applied version apply Apply a configuration to a resource by file name or stdin patch Update fields of a resource replace Replace a resource by file name or stdin wait Experimental: Wait for a specific condition on one or many resources kustomize Build a kustomization target from a directory or URL Settings Commands: label Update the labels on a resource annotate 更新一個資源的注解(annotations) completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Other Commands: api-resources Print the supported API resources on the server api-versions Print the supported API versions on the server, in the form of "group/version" config 修改 kubeconfig 檔案 plugin Provides utilities for interacting with plugins version Print the client and server version information Usage: kubectl [flags] [options] Use "kubectl --help" for more information about a given command. Use "kubectl options" for a list of global command-line options (applies to all commands). has:修改 kubeconfig 檔案 Successful (Bmessage:Mark node as schedulable. Examples: # Mark node "foo" as schedulable kubectl uncordon foo Options: --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. -l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. Usage: kubectl uncordon NODE [options] Use "kubectl options" for a list of global command-line options (applies to all commands). has:Mark node as schedulable. Successful (Bmessage:Markiere Knoten als schedulable. Examples: # Mark node "foo" as schedulable kubectl uncordon foo Options: --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. -l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. Usage: kubectl uncordon NODE [options] Use "kubectl options" for a list of global command-line options (applies to all commands). has:Markiere Knoten als schedulable. Successful (Bmessage:Mark node as schedulable. Examples: # Mark node "foo" as schedulable kubectl uncordon foo Options: --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. 
-l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. Usage: kubectl uncordon NODE [options] Use "kubectl options" for a list of global command-line options (applies to all commands). has:Mark node as schedulable. Successful (Bmessage:Mark node as schedulable. Examples: # Mark node "foo" as schedulable kubectl uncordon foo Options: --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. -l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. Usage: kubectl uncordon NODE [options] Use "kubectl options" for a list of global command-line options (applies to all commands). has:Mark node as schedulable. Successful (Bmessage:Contrassegna il nodo come programmabile. Examples: # Mark node "foo" as schedulable kubectl uncordon foo Options: --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. -l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. Usage: kubectl uncordon NODE [options] Use "kubectl options" for a list of global command-line options (applies to all commands). has:Contrassegna il nodo come programmabile. Successful (Bmessage:Mark node as schedulable. Examples: # Mark node "foo" as schedulable kubectl uncordon foo Options: --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. -l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. Usage: kubectl uncordon NODE [options] Use "kubectl options" for a list of global command-line options (applies to all commands). has:Mark node as schedulable. Successful (Bmessage:Mark node as schedulable. Examples: # Mark node "foo" as schedulable kubectl uncordon foo Options: --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. -l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. Usage: kubectl uncordon NODE [options] Use "kubectl options" for a list of global command-line options (applies to all commands). has:Mark node as schedulable. Successful (Bmessage:Remove a restrição de execução de workloads no node. Examples: # Mark node "foo" as schedulable kubectl uncordon foo Options: --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. 
-l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. Usage: kubectl uncordon NODE [options] Use "kubectl options" for a list of global command-line options (applies to all commands). has:Remove a restrição de execução de workloads no node. Successful (Bmessage:标记节点为可调度。 Examples: # Mark node "foo" as schedulable kubectl uncordon foo Options: --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. -l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. Usage: kubectl uncordon NODE [options] Use "kubectl options" for a list of global command-line options (applies to all commands). has:标记节点为可调度。 Successful (Bmessage:Mark node as schedulable. Examples: # Mark node "foo" as schedulable kubectl uncordon foo Options: --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. -l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints. Usage: kubectl uncordon NODE [options] Use "kubectl options" for a list of global command-line options (applies to all commands). has:Mark node as schedulable. +++ exit code: 0 Recording: run_kubectl_events_tests Running command: run_kubectl_events_tests +++ Running case: test-cmd.run_kubectl_events_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_events_tests +++ [0318 12:53:00] Creating namespace namespace-1679143980-188 namespace/namespace-1679143980-188 created Context "test" modified. 
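# The next case exercises the kubectl events subcommand (test/cmd/events.sh). A
# minimal sketch of the kind of invocations under test, assuming a reachable test
# cluster; flag spellings are taken from kubectl help of this era and are an
# assumption, not a quote of the test script:
kubectl events -A                # list events across all namespaces
kubectl events --types=Warning   # restrict to Warning events, as asserted below
kubectl events -o yaml           # structured output, also checked as JSON below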
+++ [0318 12:53:00] Testing kubectl events events.sh:31: Successful get namespaces {{range.items}}{{ if eq .metadata.name "test-events" }}found{{end}}{{end}}:: : (Bnamespace/test-events created events.sh:35: Successful get namespaces/test-events {{.metadata.name}}: test-events (BSuccessful (Bmessage:NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE kube-system 2m52s Normal LeaderElection Lease/kube-controller-manager 8ae82a97-c58a-11ed-8f15-da574695a788_9275f895-47b9-485c-9895-98ac9c1b7894 became leader default 2m46s Normal RegisteredNode Node/127.0.0.1 Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller namespace-1679143850-3110 2m6s Normal SuccessfulCreate ReplicationController/modified Created pod: modified-4bb47 namespace-1679143850-3110 2m5s Normal SuccessfulCreate ReplicationController/modified Created pod: modified-t5m68 namespace-1679143866-14796 112s Normal SuccessfulCreate ReplicationController/frontend Created pod: frontend-xhkwl namespace-1679143866-14796 112s Normal SuccessfulCreate ReplicationController/frontend Created pod: frontend-6sg2h namespace-1679143866-14796 112s Normal SuccessfulCreate ReplicationController/frontend Created pod: frontend-nwphc namespace-1679143869-2145 109s Normal ScalingReplicaSet Deployment/test-deployment-retainkeys Scaled up replica set test-deployment-retainkeys-6c5b6478cd to 1 namespace-1679143869-2145 109s Normal SuccessfulCreate ReplicaSet/test-deployment-retainkeys-6c5b6478cd Created pod: test-deployment-retainkeys-6c5b6478cd-vnlgc namespace-1679143869-2145 108s Normal SuccessfulDelete ReplicaSet/test-deployment-retainkeys-6c5b6478cd Deleted pod: test-deployment-retainkeys-6c5b6478cd-vnlgc namespace-1679143869-2145 108s Normal SuccessfulCreate ReplicaSet/test-deployment-retainkeys-d65c44c97 Created pod: test-deployment-retainkeys-d65c44c97-lrwff namespace-1679143869-2145 108s Normal ScalingReplicaSet Deployment/test-deployment-retainkeys Scaled down replica set test-deployment-retainkeys-6c5b6478cd to 0 from 1 namespace-1679143869-2145 108s Normal ScalingReplicaSet Deployment/test-deployment-retainkeys Scaled up replica set test-deployment-retainkeys-d65c44c97 to 1 namespace-1679143869-2145 87s (x3 over 89s) Normal FailedBinding PersistentVolumeClaim/a-pvc no persistent volumes available for this claim and no storage class is set namespace-1679143869-2145 86s (x3 over 87s) Normal FailedBinding PersistentVolumeClaim/b-pvc no persistent volumes available for this claim and no storage class is set namespace-1679143869-2145 73s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-gm7dl namespace-1679143869-2145 73s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-cdmfc namespace-1679143869-2145 73s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-bv7dh namespace-1679143869-2145 73s Normal ScalingReplicaSet Deployment/test-the-deployment Scaled up replica set test-the-deployment-6ccf78d7dd to 3 namespace-1679143869-2145 72s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-hfnzs namespace-1679143869-2145 72s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-kzghs namespace-1679143869-2145 72s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-9gtp5 namespace-1679143869-2145 72s Normal 
ScalingReplicaSet Deployment/test-the-deployment Scaled up replica set test-the-deployment-6ccf78d7dd to 3 namespace-1679143929-14049 51s Normal SuccessfulCreate ReplicaSet/my-depl-bfb57d6df Created pod: my-depl-bfb57d6df-jp424 namespace-1679143929-14049 51s Normal ScalingReplicaSet Deployment/my-depl Scaled up replica set my-depl-bfb57d6df to 1 namespace-1679143929-14049 49s Normal SuccessfulCreate ReplicaSet/nginx-5645b79496 Created pod: nginx-5645b79496-zthv9 namespace-1679143929-14049 49s Normal SuccessfulCreate ReplicaSet/nginx-5645b79496 Created pod: nginx-5645b79496-kq95p namespace-1679143929-14049 49s Normal SuccessfulCreate ReplicaSet/nginx-5645b79496 Created pod: nginx-5645b79496-5dmr5 namespace-1679143929-14049 49s Normal ScalingReplicaSet Deployment/nginx Scaled up replica set nginx-5645b79496 to 3 namespace-1679143929-14049 41s Normal SuccessfulCreate ReplicaSet/nginx-5675dfc785 Created pod: nginx-5675dfc785-qwd46 namespace-1679143929-14049 41s Normal SuccessfulCreate ReplicaSet/nginx-5675dfc785 Created pod: nginx-5675dfc785-mw89l namespace-1679143929-14049 41s Normal SuccessfulCreate ReplicaSet/nginx-5675dfc785 Created pod: nginx-5675dfc785-88f4n namespace-1679143929-14049 41s Normal ScalingReplicaSet Deployment/nginx Scaled up replica set nginx-5675dfc785 to 3 namespace-1679143929-14049 36s Normal SuccessfulCreate ReplicaSet/nginx-5675dfc785 Created pod: nginx-5675dfc785-qf9sq namespace-1679143929-14049 36s Normal SuccessfulCreate ReplicaSet/nginx-5675dfc785 Created pod: nginx-5675dfc785-4w6hk namespace-1679143929-14049 36s Normal SuccessfulCreate ReplicaSet/nginx-5675dfc785 Created pod: nginx-5675dfc785-lbg8l namespace-1679143929-14049 36s Normal ScalingReplicaSet Deployment/nginx Scaled up replica set nginx-5675dfc785 to 3 namespace-1679143973-26352 2s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-lcn7l namespace-1679143973-26352 2s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-zc4r6 namespace-1679143973-26352 2s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-tcz5t namespace-1679143973-26352 2s Normal ScalingReplicaSet Deployment/test-the-deployment Scaled up replica set test-the-deployment-6ccf78d7dd to 3 has not:Warning events.sh:42: Successful get cronjob --namespace=test-events {{range.items}}{{ if eq .metadata.name "pi" }}found{{end}}{{end}}:: : (BI0318 12:53:00.738372 19996 controller.go:624] quota admission added evaluator for: cronjobs.batch I0318 12:53:00.751565 23056 event.go:307] "Event occurred" object="test-events/pi" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Warning" reason="InvalidSchedule" message="invalid schedule: 59 23 31 2 * : time difference between two schedules is less than 1 second" cronjob.batch/pi created events.sh:46: Successful get cronjob/pi --namespace=test-events {{.metadata.name}}: pi (BSuccessful (Bmessage:NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE kube-system 2m52s Normal LeaderElection Lease/kube-controller-manager 8ae82a97-c58a-11ed-8f15-da574695a788_9275f895-47b9-485c-9895-98ac9c1b7894 became leader default 2m46s Normal RegisteredNode Node/127.0.0.1 Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller namespace-1679143850-3110 2m6s Normal SuccessfulCreate ReplicationController/modified Created pod: modified-4bb47 namespace-1679143850-3110 2m5s Normal SuccessfulCreate ReplicationController/modified Created pod: 
modified-t5m68 namespace-1679143866-14796 112s Normal SuccessfulCreate ReplicationController/frontend Created pod: frontend-xhkwl namespace-1679143866-14796 112s Normal SuccessfulCreate ReplicationController/frontend Created pod: frontend-6sg2h namespace-1679143866-14796 112s Normal SuccessfulCreate ReplicationController/frontend Created pod: frontend-nwphc namespace-1679143869-2145 109s Normal ScalingReplicaSet Deployment/test-deployment-retainkeys Scaled up replica set test-deployment-retainkeys-6c5b6478cd to 1 namespace-1679143869-2145 109s Normal SuccessfulCreate ReplicaSet/test-deployment-retainkeys-6c5b6478cd Created pod: test-deployment-retainkeys-6c5b6478cd-vnlgc namespace-1679143869-2145 108s Normal SuccessfulDelete ReplicaSet/test-deployment-retainkeys-6c5b6478cd Deleted pod: test-deployment-retainkeys-6c5b6478cd-vnlgc namespace-1679143869-2145 108s Normal SuccessfulCreate ReplicaSet/test-deployment-retainkeys-d65c44c97 Created pod: test-deployment-retainkeys-d65c44c97-lrwff namespace-1679143869-2145 108s Normal ScalingReplicaSet Deployment/test-deployment-retainkeys Scaled down replica set test-deployment-retainkeys-6c5b6478cd to 0 from 1 namespace-1679143869-2145 108s Normal ScalingReplicaSet Deployment/test-deployment-retainkeys Scaled up replica set test-deployment-retainkeys-d65c44c97 to 1 namespace-1679143869-2145 87s (x3 over 89s) Normal FailedBinding PersistentVolumeClaim/a-pvc no persistent volumes available for this claim and no storage class is set namespace-1679143869-2145 86s (x3 over 87s) Normal FailedBinding PersistentVolumeClaim/b-pvc no persistent volumes available for this claim and no storage class is set namespace-1679143869-2145 73s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-gm7dl namespace-1679143869-2145 73s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-cdmfc namespace-1679143869-2145 73s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-bv7dh namespace-1679143869-2145 73s Normal ScalingReplicaSet Deployment/test-the-deployment Scaled up replica set test-the-deployment-6ccf78d7dd to 3 namespace-1679143869-2145 72s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-hfnzs namespace-1679143869-2145 72s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-kzghs namespace-1679143869-2145 72s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-9gtp5 namespace-1679143869-2145 72s Normal ScalingReplicaSet Deployment/test-the-deployment Scaled up replica set test-the-deployment-6ccf78d7dd to 3 namespace-1679143929-14049 51s Normal SuccessfulCreate ReplicaSet/my-depl-bfb57d6df Created pod: my-depl-bfb57d6df-jp424 namespace-1679143929-14049 51s Normal ScalingReplicaSet Deployment/my-depl Scaled up replica set my-depl-bfb57d6df to 1 namespace-1679143929-14049 49s Normal SuccessfulCreate ReplicaSet/nginx-5645b79496 Created pod: nginx-5645b79496-zthv9 namespace-1679143929-14049 49s Normal SuccessfulCreate ReplicaSet/nginx-5645b79496 Created pod: nginx-5645b79496-kq95p namespace-1679143929-14049 49s Normal SuccessfulCreate ReplicaSet/nginx-5645b79496 Created pod: nginx-5645b79496-5dmr5 namespace-1679143929-14049 49s Normal ScalingReplicaSet Deployment/nginx Scaled up replica set nginx-5645b79496 to 3 
namespace-1679143929-14049 41s Normal SuccessfulCreate ReplicaSet/nginx-5675dfc785 Created pod: nginx-5675dfc785-qwd46 namespace-1679143929-14049 41s Normal SuccessfulCreate ReplicaSet/nginx-5675dfc785 Created pod: nginx-5675dfc785-mw89l namespace-1679143929-14049 41s Normal SuccessfulCreate ReplicaSet/nginx-5675dfc785 Created pod: nginx-5675dfc785-88f4n namespace-1679143929-14049 41s Normal ScalingReplicaSet Deployment/nginx Scaled up replica set nginx-5675dfc785 to 3 namespace-1679143929-14049 36s Normal SuccessfulCreate ReplicaSet/nginx-5675dfc785 Created pod: nginx-5675dfc785-qf9sq namespace-1679143929-14049 36s Normal SuccessfulCreate ReplicaSet/nginx-5675dfc785 Created pod: nginx-5675dfc785-4w6hk namespace-1679143929-14049 36s Normal SuccessfulCreate ReplicaSet/nginx-5675dfc785 Created pod: nginx-5675dfc785-lbg8l namespace-1679143929-14049 36s Normal ScalingReplicaSet Deployment/nginx Scaled up replica set nginx-5675dfc785 to 3 namespace-1679143973-26352 2s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-lcn7l namespace-1679143973-26352 2s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-zc4r6 namespace-1679143973-26352 2s Normal SuccessfulCreate ReplicaSet/test-the-deployment-6ccf78d7dd Created pod: test-the-deployment-6ccf78d7dd-tcz5t namespace-1679143973-26352 2s Normal ScalingReplicaSet Deployment/test-the-deployment Scaled up replica set test-the-deployment-6ccf78d7dd to 3 test-events 0s Warning InvalidSchedule CronJob/pi invalid schedule: 59 23 31 2 * : time difference between two schedules is less than 1 second has:Warning Successful (Bmessage:LAST SEEN TYPE REASON OBJECT MESSAGE 0s Warning InvalidSchedule CronJob/pi invalid schedule: 59 23 31 2 * : time difference between two schedules is less than 1 second has:Warning Successful (Bmessage:LAST SEEN TYPE REASON OBJECT MESSAGE 0s Warning InvalidSchedule CronJob/pi invalid schedule: 59 23 31 2 * : time difference between two schedules is less than 1 second has:Warning Successful (Bmessage:LAST SEEN TYPE REASON OBJECT MESSAGE 1s Warning InvalidSchedule CronJob/pi invalid schedule: 59 23 31 2 * : time difference between two schedules is less than 1 second has:Warning Successful (Bmessage:LAST SEEN TYPE REASON OBJECT MESSAGE 2s Warning InvalidSchedule CronJob/pi invalid schedule: 59 23 31 2 * : time difference between two schedules is less than 1 second has:Warning Successful (Bmessage:No events found in test-events namespace. 
has not:Warning
Successful
message:2s   Warning   InvalidSchedule   CronJob/pi   invalid schedule: 59 23 31 2 * : time difference between two schedules is less than 1 second
has not:LAST SEEN
Successful
message:2s   Warning   InvalidSchedule   CronJob/pi   invalid schedule: 59 23 31 2 * : time difference between two schedules is less than 1 second
has:Warning
Successful
message:{
    "kind": "EventList",
    "apiVersion": "v1",
    "metadata": {},
    "items": [
        {
            "kind": "Event",
            "apiVersion": "v1",
            "metadata": {
                "name": "pi.174d848dd36dddb2",
                "namespace": "test-events",
                "uid": "81bdf2f7-060c-42cf-9f23-1644d6d4108b",
                "resourceVersion": "1188",
                "creationTimestamp": "2023-03-18T12:53:00Z"
            },
            "involvedObject": {
                "kind": "CronJob",
                "namespace": "test-events",
                "name": "pi",
                "uid": "8d09a720-4e64-4b51-ae19-6c6b6fba9132",
                "apiVersion": "batch/v1",
                "resourceVersion": "1187"
            },
            "reason": "InvalidSchedule",
            "message": "invalid schedule: 59 23 31 2 * : time difference between two schedules is less than 1 second",
            "source": {
                "component": "cronjob-controller"
            },
            "firstTimestamp": "2023-03-18T12:53:00Z",
            "lastTimestamp": "2023-03-18T12:53:00Z",
            "count": 1,
            "type": "Warning",
            "eventTime": null,
            "reportingComponent": "",
            "reportingInstance": ""
        }
    ]
}
has:Warning
Successful
message:apiVersion: v1
items:
- apiVersion: v1
  count: 1
  eventTime: null
  firstTimestamp: "2023-03-18T12:53:00Z"
  involvedObject:
    apiVersion: batch/v1
    kind: CronJob
    name: pi
    namespace: test-events
    resourceVersion: "1187"
    uid: 8d09a720-4e64-4b51-ae19-6c6b6fba9132
  kind: Event
  lastTimestamp: "2023-03-18T12:53:00Z"
  message: 'invalid schedule: 59 23 31 2 * : time difference between two schedules is less than 1 second'
  metadata:
    creationTimestamp: "2023-03-18T12:53:00Z"
    name: pi.174d848dd36dddb2
    namespace: test-events
    resourceVersion: "1188"
    uid: 81bdf2f7-060c-42cf-9f23-1644d6d4108b
  reason: InvalidSchedule
  reportingComponent: ""
  reportingInstance: ""
  source:
    component: cronjob-controller
  type: Warning
kind: EventList
metadata: {}
has:Warning
cronjob.batch "pi" deleted
namespace "test-events" deleted
+++ exit code: 0
Recording: run_kubectl_exec_pod_tests
Running command: run_kubectl_exec_pod_tests
+++ Running case: test-cmd.run_kubectl_exec_pod_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_exec_pod_tests
+++ [0318 12:53:07] Creating namespace namespace-1679143987-30554
namespace/namespace-1679143987-30554 created
Context "test" modified.
+++ [0318 12:53:07] Testing kubectl exec POD COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:error: cannot exec into multiple objects at a time
has:cannot exec into multiple objects at a time
pod/test-pod created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned has not:pod or type/name must be specified pod "test-pod" deleted +++ exit code: 0 Recording: run_kubectl_exec_resource_name_tests Running command: run_kubectl_exec_resource_name_tests +++ Running case: test-cmd.run_kubectl_exec_resource_name_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_exec_resource_name_tests +++ [0318 12:53:08] Creating namespace namespace-1679143988-27947 namespace/namespace-1679143988-27947 created Context "test" modified. +++ [0318 12:53:08] Testing kubectl exec TYPE/NAME COMMAND Successful (Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. error: the server doesn't have a resource type "foo" has:error: Successful (Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. Error from server (NotFound): deployments.apps "bar" not found has:"bar" not found pod/test-pod created replicaset.apps/frontend created I0318 12:53:09.108700 23056 event.go:307] "Event occurred" object="namespace-1679143988-27947/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-d9zkb" I0318 12:53:09.126437 23056 event.go:307] "Event occurred" object="namespace-1679143988-27947/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-9fmth" I0318 12:53:09.126473 23056 event.go:307] "Event occurred" object="namespace-1679143988-27947/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-9qpnr" configmap/test-set-env-config created Successful (Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented has:not implemented Successful (Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. Error from server (BadRequest): pod test-pod does not have a host assigned has not:not found Successful (Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. Error from server (BadRequest): pod test-pod does not have a host assigned has not:pod, type/name or --filename must be specified Successful (Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. Error from server (BadRequest): pod frontend-9fmth does not have a host assigned has not:not found Successful (Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. 
Error from server (BadRequest): pod frontend-9fmth does not have a host assigned has not:pod, type/name or --filename must be specified pod "test-pod" deleted replicaset.apps "frontend" deleted configmap "test-set-env-config" deleted +++ exit code: 0 Recording: run_create_secret_tests Running command: run_create_secret_tests +++ Running case: test-cmd.run_create_secret_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_create_secret_tests Successful (Bmessage:Error from server (NotFound): secrets "mysecret" not found has:secrets "mysecret" not found Successful (Bmessage:user-specified has:user-specified Successful (Bmessage:Error from server (NotFound): secrets "mysecret" not found has:secrets "mysecret" not found Successful (B{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"191f1525-b8b0-4c3e-ba0b-adeca0bf6302","resourceVersion":"1232","creationTimestamp":"2023-03-18T12:53:10Z"}} Successful (Bmessage:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"191f1525-b8b0-4c3e-ba0b-adeca0bf6302","resourceVersion":"1233","creationTimestamp":"2023-03-18T12:53:10Z"},"data":{"key1":"config1"}} has:uid Successful (Bmessage:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"191f1525-b8b0-4c3e-ba0b-adeca0bf6302","resourceVersion":"1233","creationTimestamp":"2023-03-18T12:53:10Z"},"data":{"key1":"config1"}} has:config1 {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"191f1525-b8b0-4c3e-ba0b-adeca0bf6302"}} Successful (Bmessage:Error from server (NotFound): configmaps "tester-update-cm" not found has:configmaps "tester-update-cm" not found +++ exit code: 0 Recording: run_kubectl_create_kustomization_directory_tests Running command: run_kubectl_create_kustomization_directory_tests +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_create_kustomization_directory_tests create.sh:126: Successful get configmaps --field-selector=metadata.name=test-the-map {{range.items}}{{.metadata.name}}:{{end}}: (Bcreate.sh:127: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bcreate.sh:128: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: (Bconfigmap/test-the-map created I0318 12:53:11.056066 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679143988-27947/test-the-service" clusterIPs=map[IPv4:10.0.0.64] service/test-the-service created deployment.apps/test-the-deployment created I0318 12:53:11.136914 23056 event.go:307] "Event occurred" object="namespace-1679143988-27947/test-the-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-the-deployment-6ccf78d7dd to 3" I0318 12:53:11.167153 23056 event.go:307] "Event occurred" object="namespace-1679143988-27947/test-the-deployment-6ccf78d7dd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6ccf78d7dd-pw4gs" I0318 12:53:11.185242 23056 event.go:307] "Event occurred" object="namespace-1679143988-27947/test-the-deployment-6ccf78d7dd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6ccf78d7dd-dxp8q" I0318 
12:53:11.185275 23056 event.go:307] "Event occurred" object="namespace-1679143988-27947/test-the-deployment-6ccf78d7dd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6ccf78d7dd-pl2jj" create.sh:134: Successful get configmap test-the-map {{.metadata.name}}: test-the-map (Bcreate.sh:135: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment (Bcreate.sh:136: Successful get service test-the-service {{.metadata.name}}: test-the-service (Bconfigmap "test-the-map" deleted service "test-the-service" deleted deployment.apps "test-the-deployment" deleted +++ exit code: 0 Recording: run_kubectl_create_validate_tests Running command: run_kubectl_create_validate_tests +++ Running case: test-cmd.run_kubectl_create_validate_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_create_validate_tests +++ [0318 12:53:11] Creating namespace namespace-1679143991-22601 namespace/namespace-1679143991-22601 created Context "test" modified. +++ [0318 12:53:11] Testing kubectl create --validate Successful message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo" has either:strict decoding error or:error validating data +++ [0318 12:53:11] Testing kubectl create --validate=true Successful message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo" has either:strict decoding error or:error validating data +++ [0318 12:53:12] Testing kubectl create --validate=false Successful (Bmessage:deployment.apps/invalid-nginx-deployment created has:deployment.apps/invalid-nginx-deployment created I0318 12:53:12.213001 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-cbdccf466 to 4" I0318 12:53:12.252113 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-jffd5" deployment.apps "invalid-nginx-deployment" deleted I0318 12:53:12.274342 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-wxr94" I0318 12:53:12.274375 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-7wgnh" +++ [0318 12:53:12] Testing kubectl create --validate=strict I0318 12:53:12.303341 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: 
invalid-nginx-deployment-cbdccf466-l2k4c" E0318 12:53:12.336402 23056 replica_set.go:544] sync "namespace-1679143991-22601/invalid-nginx-deployment-cbdccf466" failed with replicasets.apps "invalid-nginx-deployment-cbdccf466" not found Successful message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo" has either:strict decoding error or:error validating data +++ [0318 12:53:12] Testing kubectl create --validate=warn I0318 12:53:12.657953 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="test-events" Warning: unknown field "spec.baz" Warning: unknown field "spec.foo" Successful (Bmessage:deployment.apps/invalid-nginx-deployment created has:deployment.apps/invalid-nginx-deployment created I0318 12:53:12.699498 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-cbdccf466 to 4" I0318 12:53:12.716240 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-vz6pr" I0318 12:53:12.732111 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-cxwd6" I0318 12:53:12.732140 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-b42k9" I0318 12:53:12.749900 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-t9l59" deployment.apps "invalid-nginx-deployment" deleted +++ [0318 12:53:12] Testing kubectl create --validate=ignore Successful (Bmessage:deployment.apps/invalid-nginx-deployment created has:deployment.apps/invalid-nginx-deployment created I0318 12:53:12.856838 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-cbdccf466 to 4" I0318 12:53:12.871607 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-f68m6" I0318 12:53:12.894421 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-svnvl" I0318 12:53:12.894458 23056 event.go:307] "Event occurred" 
object="namespace-1679143991-22601/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-f72m9" I0318 12:53:12.918286 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-fclvh" deployment.apps "invalid-nginx-deployment" deleted +++ [0318 12:53:12] Testing kubectl create E0318 12:53:12.970551 23056 replica_set.go:544] sync "namespace-1679143991-22601/invalid-nginx-deployment-cbdccf466" failed with replicasets.apps "invalid-nginx-deployment-cbdccf466" not found Successful message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo" has either:strict decoding error or:error validating data +++ [0318 12:53:13] Testing kubectl create --validate=foo Successful (Bmessage:error: invalid - validate option "foo"; must be one of: strict (or true), warn, ignore (or false) has:invalid - validate option "foo" +++ exit code: 0 Recording: run_convert_tests Running command: run_convert_tests +++ Running case: test-cmd.run_convert_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_convert_tests convert.sh:27: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bdeployment.apps/nginx created I0318 12:53:13.525995 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-77566b75db to 3" I0318 12:53:13.544936 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/nginx-77566b75db" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-77566b75db-57xlt" I0318 12:53:13.561343 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/nginx-77566b75db" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-77566b75db-5n2cx" I0318 12:53:13.561375 23056 event.go:307] "Event occurred" object="namespace-1679143991-22601/nginx-77566b75db" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-77566b75db-9qsc5" convert.sh:31: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx: (Bconvert.sh:32: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (Bconvert.sh:36: Successful get deployment nginx {{ .apiVersion }}: apps/v1 (BSuccessful (Bmessage:apiVersion: apps/v1beta1 kind: Deployment metadata: creationTimestamp: null labels: name: nginx-undo name: nginx spec: progressDeadlineSeconds: 600 replicas: 3 revisionHistoryLimit: 10 selector: matchLabels: name: nginx-undo strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: creationTimestamp: null labels: name: nginx-undo spec: containers: - image: registry.k8s.io/nginx:test-cmd imagePullPolicy: IfNotPresent name: nginx ports: - containerPort: 80 protocol: TCP resources: {} 
terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 status: {} has:apps/v1beta1 deployment.apps "nginx" deleted Successful (Bmessage:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}' has:Object 'Kind' is missing Successful (Bmessage:nginx: has:nginx: +++ exit code: 0 Recording: run_kubectl_delete_allnamespaces_tests Running command: run_kubectl_delete_allnamespaces_tests +++ Running case: test-cmd.run_kubectl_delete_allnamespaces_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_delete_allnamespaces_tests namespace/namespace-1679143994-6980 created namespace/namespace-1679143994-7385 created configmap/one created configmap/two created configmap/one labeled configmap/two labeled configmap "one" deleted (dry run) configmap "two" deleted (dry run) configmap "one" deleted (server dry run) configmap "two" deleted (server dry run) Context "test" modified. delete.sh:40: Successful get configmap -l deletetest {{range.items}}{{.metadata.name}}:{{end}}: one: (BContext "test" modified. delete.sh:42: Successful get configmap -l deletetest {{range.items}}{{.metadata.name}}:{{end}}: two: (Bconfigmap "one" deleted configmap "two" deleted Context "test" modified. delete.sh:48: Successful get configmap -l deletetest {{range.items}}{{.metadata.name}}:{{end}}: (BContext "test" modified. delete.sh:50: Successful get configmap -l deletetest {{range.items}}{{.metadata.name}}:{{end}}: (B+++ exit code: 0 Recording: run_kubectl_request_timeout_tests Running command: run_kubectl_request_timeout_tests +++ Running case: test-cmd.run_kubectl_request_timeout_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_request_timeout_tests +++ [0318 12:53:15] Testing kubectl request timeout +++ [0318 12:53:15] Creating namespace namespace-1679143995-4864 namespace/namespace-1679143995-4864 created Context "test" modified. 
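# The request-timeout case below drives kubectl's global --request-timeout flag
# (test/cmd/request-timeout.sh). A minimal sketch of the behaviour being checked,
# assuming any reachable cluster:
kubectl get pods --request-timeout=1s           # plain request, completes well inside the budget
kubectl get pods --watch --request-timeout=1s   # the watch is torn down once the deadline expires
# The second form is expected to surface an error containing "Timeout"; in the
# run recorded below the watch stream is instead closed with "context deadline
# exceeded", so the contains-check fails and the case exits non-zero.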
request-timeout.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {
                "creationTimestamp": "2023-03-18T12:53:15Z",
                "labels": {
                    "name": "valid-pod"
                },
                "name": "valid-pod",
                "namespace": "namespace-1679143995-4864",
                "resourceVersion": "1385",
                "uid": "92b01fd6-afe1-4bfb-bc9b-3346a063aaa6"
            },
            "spec": {
                "containers": [
                    {
                        "image": "registry.k8s.io/serve_hostname",
                        "imagePullPolicy": "Always",
                        "name": "kubernetes-serve-hostname",
                        "resources": {
                            "limits": {
                                "cpu": "1",
                                "memory": "512Mi"
                            },
                            "requests": {
                                "cpu": "1",
                                "memory": "512Mi"
                            }
                        },
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File"
                    }
                ],
                "dnsPolicy": "ClusterFirst",
                "enableServiceLinks": true,
                "preemptionPolicy": "PreemptLowerPriority",
                "priority": 0,
                "restartPolicy": "Always",
                "schedulerName": "default-scheduler",
                "securityContext": {},
                "terminationGracePeriodSeconds": 30
            },
            "status": {
                "phase": "Pending",
                "qosClass": "Guaranteed"
            }
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": ""
    }
}
request-timeout.sh:34: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
FAIL!
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
I0318 12:53:16.985539 38635 streamwatcher.go:114] Unable to decode an event from the watch stream: context deadline exceeded
has not:Timeout
42 /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/request-timeout.sh
!!! [0318 12:53:16] Call tree:
!!! [0318 12:53:16]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 run_kubectl_request_timeout_tests(...)
!!! [0318 12:53:16]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0318 12:53:17]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:141 juLog(...)
!!! [0318 12:53:17]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:624 record_command(...)
!!! [0318 12:53:17]  5: hack/make-rules/test-cmd.sh:194 runTests(...)
+++ exit code: 1
+++ error: 1
Error when running run_kubectl_request_timeout_tests
Recording: run_crd_tests
Running command: run_crd_tests
+++ Running case: test-cmd.run_crd_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_crd_tests
+++ [0318 12:53:17] Creating namespace namespace-1679143997-5909
namespace/namespace-1679143997-5909 created
Context "test" modified.
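# The CRD cases below register several CustomResourceDefinitions
# (foos.company.com, bars.company.com, resources.mygroup.example.com,
# validfoos.company.com); the actual fixtures live in test/cmd/crd.sh. A minimal
# sketch of an equivalent manifest for the first one; the schema shown here is an
# assumption, chosen to admit the free-form fields used later in the log:
kubectl create -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.company.com
spec:
  group: company.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        # accept arbitrary fields such as someField/nestedField (assumption)
        x-kubernetes-preserve-unknown-fields: true
EOF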
+++ [0318 12:53:17] Testing kubectl crd
customresourcedefinition.apiextensions.k8s.io/foos.company.com created
I0318 12:53:17.411375 19996 handler.go:165] Adding GroupVersion company.com v1 to ResourceManager
crd.sh:73: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name "foos.company.com"}}{{.metadata.name}}:{{end}}{{end}}: foos.company.com:
I0318 12:53:17.635672 19996 handler.go:165] Adding GroupVersion company.com v1 to ResourceManager
customresourcedefinition.apiextensions.k8s.io/bars.company.com created
I0318 12:53:17.647055 19996 handler.go:165] Adding GroupVersion company.com v1 to ResourceManager
I0318 12:53:17.658060 19996 handler.go:165] Adding GroupVersion company.com v1 to ResourceManager
crd.sh:107: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name "foos.company.com" "bars.company.com"}}{{.metadata.name}}:{{end}}{{end}}: bars.company.com:foos.company.com:
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0318 12:53:17.923169 19996 handler.go:165] Adding GroupVersion mygroup.example.com v1alpha1 to ResourceManager
crd.sh:146: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name "foos.company.com" "bars.company.com" "resources.mygroup.example.com"}}{{.metadata.name}}:{{end}}{{end}}: bars.company.com:foos.company.com:resources.mygroup.example.com:
customresourcedefinition.apiextensions.k8s.io/validfoos.company.com created
I0318 12:53:18.545250 19996 handler.go:165] Adding GroupVersion company.com v1 to ResourceManager
I0318 12:53:18.563963 19996 handler.go:165] Adding GroupVersion company.com v1 to ResourceManager
I0318 12:53:18.573860 19996 handler.go:165] Adding GroupVersion company.com v1 to ResourceManager
crd.sh:188: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name "foos.company.com" "bars.company.com" "resources.mygroup.example.com" "validfoos.company.com"}}{{.metadata.name}}:{{end}}{{end}}: bars.company.com:foos.company.com:resources.mygroup.example.com:validfoos.company.com:
+++ [0318 12:53:18] Creating namespace namespace-1679143998-15319
namespace/namespace-1679143998-15319 created
Context "test" modified.
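# With the CRDs registered, the next case drives ordinary kubectl verbs against
# custom "Foo" objects. A minimal sketch reusing names and fields that appear in
# the output below; the manifest itself is a reconstruction, not the test fixture:
kubectl create -f - <<'EOF'
apiVersion: company.com/v1
kind: Foo
metadata:
  name: test
  labels:
    pruneGroup: "true"
someField: field1
otherField: field2
nestedField:
  someSubfield: subfield1
  otherSubfield: subfield2
EOF
kubectl get foos.v1.company.com   # fully-qualified form, asserted at crd.sh:270
# Strategic-merge patch is not implemented for custom resources, so the tests
# fall back to a JSON merge patch:
kubectl patch foos/test --type=merge -p '{"patched":"value1"}'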
+++ [0318 12:53:18] Testing kubectl non-native resources {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"company.com/v1","resources":[{"name":"foos","singularName":"foo","namespaced":true,"kind":"Foo","verbs":["delete","deletecollection","get","list","patch","create","update","watch"],"storageVersionHash":"xIRtouR4Ix8="},{"name":"bars","singularName":"bar","namespaced":true,"kind":"Bar","verbs":["delete","deletecollection","get","list","patch","create","update","watch"],"storageVersionHash":"5GMNuFRm/lM="},{"name":"validfoos","singularName":"validfoo","namespaced":true,"kind":"ValidFoo","verbs":["delete","deletecollection","get","list","patch","create","update","watch"],"storageVersionHash":"mHoViSBo05k="}]} {"apiVersion":"company.com/v1","items":[],"kind":"FooList","metadata":{"continue":"","resourceVersion":"1407"}} {"apiVersion":"company.com/v1","items":[],"kind":"BarList","metadata":{"continue":"","resourceVersion":"1407"}} crd.sh:233: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: (Bcrd.sh:236: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: (Bcrd.sh:239: Successful get resources {{range.items}}{{.metadata.name}}:{{end}}: (Bkind.mygroup.example.com/myobj created Successful (Bmessage:kind.mygroup.example.com/myobj has:kind.mygroup.example.com/myobj Successful (Bmessage:kind.mygroup.example.com/myobj has:kind.mygroup.example.com/myobj Successful (Bmessage:kind.mygroup.example.com/myobj has:kind.mygroup.example.com/myobj kind.mygroup.example.com "myobj" deleted crd.sh:258: Successful get resources {{range.items}}{{.metadata.name}}:{{end}}: (BI0318 12:53:20.187758 19996 controller.go:624] quota admission added evaluator for: foos.company.com foo.company.com/test created foo.company.com/second-instance created crd.sh:265: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: test: (Bcrd.sh:268: Successful get foo {{range.items}}{{.metadata.name}}:{{end}}: test: (Bcrd.sh:269: Successful get foos.company.com {{range.items}}{{.metadata.name}}:{{end}}: test: (Bcrd.sh:270: Successful get foos.v1.company.com {{range.items}}{{.metadata.name}}:{{end}}: test: (B+++ [0318 12:53:20] Testing CustomResource printing NAME AGE test 0s NAME AGE test 0s foo.company.com/test foo.company.com/test NAME AGE test 0s NAME AGE test 1s { "apiVersion": "v1", "items": [ { "apiVersion": "company.com/v1", "kind": "Foo", "metadata": { "creationTimestamp": "2023-03-18T12:53:20Z", "generation": 1, "labels": { "pruneGroup": "true" }, "name": "test", "namespace": "namespace-1679143998-15319", "resourceVersion": "1411", "uid": "19b4d9b1-9e6d-4f34-ab10-cfe1c0b392a9" }, "nestedField": { "otherSubfield": "subfield2", "someSubfield": "subfield1" }, "otherField": "field2", "someField": "field1" } ], "kind": "List", "metadata": { "resourceVersion": "" } } { "apiVersion": "company.com/v1", "kind": "Foo", "metadata": { "creationTimestamp": "2023-03-18T12:53:20Z", "generation": 1, "labels": { "pruneGroup": "true" }, "name": "test", "namespace": "namespace-1679143998-15319", "resourceVersion": "1411", "uid": "19b4d9b1-9e6d-4f34-ab10-cfe1c0b392a9" }, "nestedField": { "otherSubfield": "subfield2", "someSubfield": "subfield1" }, "otherField": "field2", "someField": "field1" } apiVersion: v1 items: - apiVersion: company.com/v1 kind: Foo metadata: creationTimestamp: "2023-03-18T12:53:20Z" generation: 1 labels: pruneGroup: "true" name: test namespace: namespace-1679143998-15319 resourceVersion: "1411" uid: 19b4d9b1-9e6d-4f34-ab10-cfe1c0b392a9 nestedField: otherSubfield: subfield2 
someSubfield: subfield1 otherField: field2 someField: field1 kind: List metadata: resourceVersion: "" apiVersion: company.com/v1 kind: Foo metadata: creationTimestamp: "2023-03-18T12:53:20Z" generation: 1 labels: pruneGroup: "true" name: test namespace: namespace-1679143998-15319 resourceVersion: "1411" uid: 19b4d9b1-9e6d-4f34-ab10-cfe1c0b392a9 nestedField: otherSubfield: subfield2 someSubfield: subfield1 otherField: field2 someField: field1 field1field1field1field1Successful (Bmessage:foo.company.com/test has:foo.company.com/test +++ [0318 12:53:21] Testing CustomResource patching foo.company.com/test patched crd.sh:294: Successful get foos/test {{.patched}}: value1 (BFlag --record has been deprecated, --record will be removed in the future foo.company.com/test patched crd.sh:296: Successful get foos/test {{.patched}}: value2 (BFlag --record has been deprecated, --record will be removed in the future foo.company.com/test patched crd.sh:298: Successful get foos/test {{.patched}}: (B+++ [0318 12:53:22] "kubectl patch --local" returns error as expected for CustomResource: error: strategic merge patch is not supported for company.com/v1, Kind=Foo locally, try --type merge { "apiVersion": "company.com/v1", "kind": "Foo", "metadata": { "annotations": { "kubernetes.io/change-cause": "kubectl patch foos/test --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --patch={\"patched\":null} --type=merge --record=true" }, "creationTimestamp": "2023-03-18T12:53:20Z", "generation": 4, "labels": { "pruneGroup": "true" }, "name": "test", "namespace": "namespace-1679143998-15319", "resourceVersion": "1419", "uid": "19b4d9b1-9e6d-4f34-ab10-cfe1c0b392a9" }, "nestedField": { "otherSubfield": "subfield2", "someSubfield": "subfield1" }, "otherField": "field2", "patched": "value3", "someField": "field1" } Flag --record has been deprecated, --record will be removed in the future { "apiVersion": "company.com/v1", "kind": "Foo", "metadata": { "annotations": { "kubernetes.io/change-cause": "kubectl patch --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --record=true --filename=/tmp/tmp.JFDEKO8UeQ/crd-foos-test.json --patch={\"patched\":\"value3\"} --type=merge --output=json" }, "creationTimestamp": "2023-03-18T12:53:20Z", "generation": 5, "labels": { "pruneGroup": "true" }, "name": "test", "namespace": "namespace-1679143998-15319", "resourceVersion": "1421", "uid": "19b4d9b1-9e6d-4f34-ab10-cfe1c0b392a9" }, "nestedField": { "otherSubfield": "subfield2", "someSubfield": "subfield1" }, "otherField": "field2", "patched": "value3", "someField": "field1" } crd.sh:315: Successful get foos/test {{.patched}}: value3 (B+++ [0318 12:53:22] Testing CustomResource labeling foo.company.com/test labeled foo.company.com/test labeled foo.company.com/second-instance labeled foo.company.com/test labeled allnsLabel: "true" allnsLabel: "true" +++ [0318 12:53:22] Testing CustomResource annotating foo.company.com/test annotate foo.company.com/test annotate foo.company.com/second-instance annotate foo.company.com/test annotate allnsannotation: "true" allnsannotation: "true" +++ [0318 12:53:23] Testing CustomResource describing Name: test Namespace: namespace-1679143998-15319 Labels: allnsLabel=true itemlabel=true listlabel=true pruneGroup=true Annotations: allnsannotation: true itemannotation: true kubernetes.io/change-cause: kubectl patch --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --record=true 
--filename=/tmp/tm... listannotation: true API Version: company.com/v1 Kind: Foo Metadata: Creation Timestamp: 2023-03-18T12:53:20Z Generation: 5 Resource Version: 1430 UID: 19b4d9b1-9e6d-4f34-ab10-cfe1c0b392a9 Nested Field: Other Subfield: subfield2 Some Subfield: subfield1 Other Field: field2 Patched: value3 Some Field: field1 Events: Name: test Namespace: namespace-1679143998-15319 Labels: allnsLabel=true itemlabel=true listlabel=true pruneGroup=true Annotations: allnsannotation: true itemannotation: true kubernetes.io/change-cause: kubectl patch --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --record=true --filename=/tmp/tm... listannotation: true API Version: company.com/v1 Kind: Foo Metadata: Creation Timestamp: 2023-03-18T12:53:20Z Generation: 5 Resource Version: 1430 UID: 19b4d9b1-9e6d-4f34-ab10-cfe1c0b392a9 Nested Field: Other Subfield: subfield2 Some Subfield: subfield1 Other Field: field2 Patched: value3 Some Field: field1 Events: listlabel=true itemlabel=true query for customresourcedefinitions had limit param query for events had limit param query for customresourcedefinitions had user-specified limit param Successful describe customresourcedefinitions verbose logs: I0318 12:53:23.448487 39579 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:53:23.453367 39579 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:53:23.459797 39579 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?limit=500 200 OK in 2 milliseconds I0318 12:53:23.463042 39579 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/bars.company.com 200 OK in 1 milliseconds I0318 12:53:23.467339 39579 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dbars.company.com%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DCustomResourceDefinition%2CinvolvedObject.uid%3Dcedaf9b0-eb70-403b-8aed-461acd9029b4&limit=500 200 OK in 4 milliseconds I0318 12:53:23.469687 39579 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/foos.company.com 200 OK in 1 milliseconds I0318 12:53:23.474233 39579 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.namespace%3D%2CinvolvedObject.kind%3DCustomResourceDefinition%2CinvolvedObject.uid%3D49e3ee35-fc03-4154-8da1-1d62988e9eb6%2CinvolvedObject.name%3Dfoos.company.com&limit=500 200 OK in 4 milliseconds I0318 12:53:23.476343 39579 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/resources.mygroup.example.com 200 OK in 1 milliseconds I0318 12:53:23.480929 39579 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.kind%3DCustomResourceDefinition%2CinvolvedObject.uid%3Dcf033026-b9b4-4593-80e5-fca80637dcbd%2CinvolvedObject.name%3Dresources.mygroup.example.com%2CinvolvedObject.namespace%3D&limit=500 200 OK in 4 milliseconds I0318 12:53:23.483378 39579 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/validfoos.company.com 200 OK in 1 milliseconds I0318 12:53:23.487747 39579 round_trippers.go:553] GET 
https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.kind%3DCustomResourceDefinition%2CinvolvedObject.uid%3D14d3c74f-a26f-4b57-8b15-199f8ea44eba%2CinvolvedObject.name%3Dvalidfoos.company.com%2CinvolvedObject.namespace%3D&limit=500 200 OK in 4 milliseconds (Bquery for foos had limit param query for events had limit param query for foos had user-specified limit param Successful describe foos verbose logs: I0318 12:53:23.620063 39604 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:53:23.625046 39604 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:53:23.631100 39604 round_trippers.go:553] GET https://127.0.0.1:6443/apis/company.com/v1/namespaces/namespace-1679143998-15319/foos?limit=500 200 OK in 2 milliseconds I0318 12:53:23.633593 39604 round_trippers.go:553] GET https://127.0.0.1:6443/apis/company.com/v1/namespaces/namespace-1679143998-15319/foos/test 200 OK in 1 milliseconds I0318 12:53:23.635043 39604 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143998-15319/events?fieldSelector=involvedObject.name%3Dtest%2CinvolvedObject.namespace%3Dnamespace-1679143998-15319%2CinvolvedObject.kind%3DFoo%2CinvolvedObject.uid%3D19b4d9b1-9e6d-4f34-ab10-cfe1c0b392a9&limit=500 200 OK in 1 milliseconds (Bfoo.company.com "test" deleted crd.sh:351: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: (BI0318 12:53:24.031724 19996 controller.go:624] quota admission added evaluator for: bars.company.com bar.company.com/test created crd.sh:357: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: test: (B+++ [0318 12:53:24] Testing CustomResource watching bar.company.com/test patched bar.company.com/test patched Successful (Bmessage:bar.company.com/test has:bar.company.com/test /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 363: 39683 Killed kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name bar.company.com "test" deleted W0318 12:53:39.241771 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. I0318 12:53:39.241830 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="foos.company.com" W0318 12:53:39.241872 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. I0318 12:53:39.241891 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="resources.mygroup.example.com" W0318 12:53:39.241905 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. I0318 12:53:39.241926 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="validfoos.company.com" W0318 12:53:39.241953 23056 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. 
I0318 12:53:39.241971 23056 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="bars.company.com"
I0318 12:53:39.242041 23056 shared_informer.go:311] Waiting for caches to sync for resource quota
I0318 12:53:39.342720 23056 shared_informer.go:318] Caches are synced for resource quota
I0318 12:53:39.561897 23056 shared_informer.go:311] Waiting for caches to sync for garbage collector
I0318 12:53:39.561949 23056 shared_informer.go:318] Caches are synced for garbage collector
/home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 224: 39684 Killed while [ ${tries} -lt 10 ]; do tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1; done
crd.sh:389: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}:
foo.company.com/test created
crd.sh:395: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: test:
crd.sh:398: Successful get foos/test {{.someField}}: field1
foo.company.com/test unchanged
crd.sh:404: Successful get foos/test {{.someField}}: field1
crd.sh:407: Successful get foos/test {{.nestedField.someSubfield}}: subfield1
foo.company.com/test configured
crd.sh:413: Successful get foos/test {{.nestedField.someSubfield}}: modifiedSubfield
crd.sh:416: Successful get foos/test {{.nestedField.otherSubfield}}: subfield2
foo.company.com/test configured
crd.sh:422: Successful get foos/test {{.nestedField.otherSubfield}}:
crd.sh:425: Successful get foos/test {{.nestedField.newSubfield}}:
foo.company.com/test configured
crd.sh:431: Successful get foos/test {{.nestedField.newSubfield}}: subfield3
foo.company.com "test" deleted
crd.sh:437: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}:
foo.company.com/test-list created
bar.company.com/test-list created
crd.sh:443: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: test-list:
crd.sh:444: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: test-list:
crd.sh:447: Successful get foos/test-list {{.someField}}: field1
crd.sh:448: Successful get bars/test-list {{.someField}}: field1
foo.company.com/test-list unchanged
bar.company.com/test-list unchanged
crd.sh:454: Successful get foos/test-list {{.someField}}: field1
crd.sh:455: Successful get bars/test-list {{.someField}}: field1
crd.sh:458: Successful get foos/test-list {{.someField}}: field1
crd.sh:459: Successful get bars/test-list {{.someField}}: field1
foo.company.com/test-list configured
bar.company.com/test-list configured
crd.sh:465: Successful get foos/test-list {{.someField}}: modifiedField
crd.sh:466: Successful get bars/test-list {{.someField}}: modifiedField
crd.sh:469: Successful get foos/test-list {{.otherField}}: field2
crd.sh:470: Successful get bars/test-list {{.otherField}}: field2
foo.company.com/test-list configured
bar.company.com/test-list configured
crd.sh:476: Successful get foos/test-list {{.otherField}}:
crd.sh:477: Successful get bars/test-list {{.otherField}}:
crd.sh:480: Successful get foos/test-list {{.newField}}:
crd.sh:481: Successful get bars/test-list {{.newField}}:
foo.company.com/test-list configured
bar.company.com/test-list configured
crd.sh:487: Successful get foos/test-list {{.newField}}: field3
crd.sh:488: Successful get bars/test-list {{.newField}}: field3
foo.company.com "test-list" deleted
bar.company.com "test-list" deleted
crd.sh:494: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}:
crd.sh:495: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}:
crd.sh:499: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}:
crd.sh:500: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}:
Flag --prune-whitelist has been deprecated, Use --prune-allowlist instead.
Flag --prune-whitelist has been deprecated, Use --prune-allowlist instead.
foo.company.com/test created
crd.sh:505: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: test:
crd.sh:506: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}:
Flag --prune-whitelist has been deprecated, Use --prune-allowlist instead.
Flag --prune-whitelist has been deprecated, Use --prune-allowlist instead.
bar.company.com/test created
foo.company.com/test pruned
crd.sh:511: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}:
crd.sh:512: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: test:
bar.company.com "test" deleted
crd.sh:518: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}:
crd.sh:519: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}:
namespace/non-native-resources created
bar.company.com/test created
crd.sh:524: Successful get bars {{len .items}}: 1
namespace "non-native-resources" deleted
crd.sh:527: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I0318 12:53:50.510563 19996 handler.go:165] Adding GroupVersion company.com v1 to ResourceManager
I0318 12:53:50.525228 19996 handler.go:165] Adding GroupVersion company.com v1 to ResourceManager
I0318 12:53:50.553781 19996 handler.go:165] Adding GroupVersion company.com v1 to ResourceManager
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0318 12:53:50.686135 19996 handler.go:165] Adding GroupVersion company.com v1 to ResourceManager
I0318 12:53:50.719625 19996 handler.go:165] Adding GroupVersion company.com v1 to ResourceManager
I0318 12:53:50.729913 19996 handler.go:165] Adding GroupVersion company.com v1 to ResourceManager
I0318 12:53:50.861701 19996 handler.go:165] Adding GroupVersion mygroup.example.com v1alpha1 to ResourceManager
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0318 12:53:50.871192 19996 handler.go:165] Adding GroupVersion mygroup.example.com v1alpha1 to ResourceManager
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0318 12:53:51.048201 19996 handler.go:165] Adding GroupVersion company.com v1 to ResourceManager
I0318 12:53:51.077872 19996 handler.go:165] Adding GroupVersion company.com v1 to ResourceManager
+++ exit code: 0
Recording: run_recursive_resources_tests
Running command: run_recursive_resources_tests
+++ Running case: test-cmd.run_recursive_resources_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_recursive_resources_tests
+++ [0318 12:53:51] Testing recursive resources
+++ [0318 12:53:51] Creating namespace namespace-1679144031-18655
namespace/namespace-1679144031-18655 created
Context "test" modified.
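The "foo.company.com/test pruned" step above comes from kubectl apply --prune; the repeated deprecation warnings appear because the test still spells the flag --prune-whitelist, which current kubectl accepts but maps to --prune-allowlist. A sketch of the invocation shape, with an illustrative directory and the pruneGroup label from the log rather than the actual crd.sh fixtures:

    # Apply the manifests under dir/ that match the selector, and delete
    # previously-applied objects of the allowlisted kinds that are no
    # longer present in the input (<group>/<version>/<kind>).
    kubectl apply --prune -l pruneGroup=true \
      --prune-allowlist=company.com/v1/Foo \
      --prune-allowlist=company.com/v1/Bar \
      -f dir/

That is why applying input that now contains only the Bar object deletes the previously applied Foo: Foo is allowlisted for pruning and its manifest is gone from the input.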
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (BW0318 12:53:51.554234 19996 cacher.go:171] Terminating all watchers from cacher foos.company.com E0318 12:53:51.555650 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource W0318 12:53:51.731033 19996 cacher.go:171] Terminating all watchers from cacher bars.company.com E0318 12:53:51.732429 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource W0318 12:53:51.884062 19996 cacher.go:171] Terminating all watchers from cacher resources.mygroup.example.com E0318 12:53:51.885470 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource W0318 12:53:52.091788 19996 cacher.go:171] Terminating all watchers from cacher validfoos.company.com E0318 12:53:52.093355 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (BSuccessful (Bmessage:pod/busybox0 created pod/busybox1 created error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false has:error validating data: kind not set generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (Bgeneric-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox: (BSuccessful (Bmessage:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}' has:Object 'Kind' is missing generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (BW0318 12:53:52.525501 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:53:52.525547 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource W0318 12:53:52.836702 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:53:52.836745 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource W0318 12:53:52.904064 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the 
requested resource E0318 12:53:52.904116 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced: (BSuccessful (Bmessage:pod/busybox0 replaced pod/busybox1 replaced error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false has:error validating data: kind not set generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (BSuccessful (Bmessage:Name: busybox0 Namespace: namespace-1679144031-18655 Priority: 0 Node: Labels: app=busybox0 status=replaced Annotations: Status: Pending IP: IPs: Containers: busybox: Image: busybox Port: Host Port: Command: sleep 3600 Environment: Mounts: Volumes: QoS Class: BestEffort Node-Selectors: Tolerations: Events: Name: busybox1 Namespace: namespace-1679144031-18655 Priority: 0 Node: Labels: app=busybox1 status=replaced Annotations: Status: Pending IP: IPs: Containers: busybox: Image: busybox Port: Host Port: Command: sleep 3600 Environment: Mounts: Volumes: QoS Class: BestEffort Node-Selectors: Tolerations: Events: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}' has:app=busybox0 Successful (Bmessage:Name: busybox0 Namespace: namespace-1679144031-18655 Priority: 0 Node: Labels: app=busybox0 status=replaced Annotations: Status: Pending IP: IPs: Containers: busybox: Image: busybox Port: Host Port: Command: sleep 3600 Environment: Mounts: Volumes: QoS Class: BestEffort Node-Selectors: Tolerations: Events: Name: busybox1 Namespace: namespace-1679144031-18655 Priority: 0 Node: Labels: app=busybox1 status=replaced Annotations: Status: Pending IP: IPs: Containers: busybox: Image: busybox Port: Host Port: Command: sleep 3600 Environment: Mounts: Volumes: QoS Class: BestEffort Node-Selectors: Tolerations: Events: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}' has:app=busybox1 Successful (Bmessage:Name: busybox0 Namespace: namespace-1679144031-18655 Priority: 0 Node: Labels: app=busybox0 status=replaced Annotations: Status: Pending IP: IPs: Containers: busybox: Image: busybox Port: Host Port: Command: sleep 3600 Environment: Mounts: Volumes: QoS Class: BestEffort Node-Selectors: Tolerations: Events: Name: busybox1 Namespace: namespace-1679144031-18655 Priority: 0 Node: Labels: app=busybox1 status=replaced Annotations: Status: Pending IP: IPs: Containers: busybox: Image: busybox Port: Host Port: Command: sleep 3600 Environment: Mounts: Volumes: QoS Class: BestEffort Node-Selectors: Tolerations: Events: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in 
'{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}' has:Object 'Kind' is missing generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (Bgeneric-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue: (BSuccessful (Bmessage:pod/busybox0 annotate pod/busybox1 annotate error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}' has:Object 'Kind' is missing W0318 12:53:53.554558 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:53:53.554603 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (Bgeneric-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced: (BSuccessful (Bmessage:Warning: resource pods/busybox0 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. pod/busybox0 configured Warning: resource pods/busybox1 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. 
pod/busybox1 configured error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false has:error validating data: kind not set generic-resources.sh:264: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (BSuccessful (Bmessage:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}' has:busybox0:busybox1: Successful (Bmessage:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}' has:Object 'Kind' is missing generic-resources.sh:273: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (Bpod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}' generic-resources.sh:278: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue: (BSuccessful (Bmessage:pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}' has:Object 'Kind' is missing generic-resources.sh:283: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (BW0318 12:53:54.427139 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:53:54.427182 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}' generic-resources.sh:288: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox: (BSuccessful (Bmessage:pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in 
'{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}' has:Object 'Kind' is missing generic-resources.sh:293: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (Bgeneric-resources.sh:297: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (BSuccessful (Bmessage:Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "busybox0" force deleted pod "busybox1" force deleted error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}' has:Object 'Kind' is missing generic-resources.sh:302: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (Breplicationcontroller/busybox0 created I0318 12:53:55.119958 23056 event.go:307] "Event occurred" object="namespace-1679144031-18655/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-pjlcs" replicationcontroller/busybox1 created error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false I0318 12:53:55.167996 23056 event.go:307] "Event occurred" object="namespace-1679144031-18655/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-4wnxz" generic-resources.sh:306: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (BI0318 12:53:55.292424 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="non-native-resources" generic-resources.sh:311: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (Bgeneric-resources.sh:312: Successful get rc busybox0 {{.spec.replicas}}: 1 (Bgeneric-resources.sh:313: Successful get rc busybox1 {{.spec.replicas}}: 1 (Bgeneric-resources.sh:318: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80 (Bgeneric-resources.sh:319: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80 (BSuccessful (Bmessage:horizontalpodautoscaler.autoscaling/busybox0 autoscaled horizontalpodautoscaler.autoscaling/busybox1 autoscaled error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}' has:Object 'Kind' is missing horizontalpodautoscaler.autoscaling "busybox0" deleted W0318 12:53:55.785776 23056 reflector.go:533] 
vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:53:55.785814 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource horizontalpodautoscaler.autoscaling "busybox1" deleted W0318 12:53:55.880663 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:53:55.880702 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource W0318 12:53:55.891517 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:53:55.891555 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (Bgeneric-resources.sh:328: Successful get rc busybox0 {{.spec.replicas}}: 1 (Bgeneric-resources.sh:329: Successful get rc busybox1 {{.spec.replicas}}: 1 (BI0318 12:53:56.106680 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144031-18655/busybox0" clusterIPs=map[IPv4:10.0.0.177] I0318 12:53:56.184956 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144031-18655/busybox1" clusterIPs=map[IPv4:10.0.0.234] generic-resources.sh:333: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: 80 (Bgeneric-resources.sh:334: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: 80 (BSuccessful (Bmessage:service/busybox0 exposed service/busybox1 exposed error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}' has:Object 'Kind' is missing generic-resources.sh:340: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (Bgeneric-resources.sh:341: Successful get rc busybox0 {{.spec.replicas}}: 1 (Bgeneric-resources.sh:342: Successful get rc busybox1 {{.spec.replicas}}: 1 (BI0318 12:53:56.627732 23056 event.go:307] "Event occurred" object="namespace-1679144031-18655/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-hsjc5" I0318 12:53:56.668443 23056 event.go:307] "Event occurred" object="namespace-1679144031-18655/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-zx6xz" generic-resources.sh:346: Successful get rc busybox0 
{{.spec.replicas}}: 2 (Bgeneric-resources.sh:347: Successful get rc busybox1 {{.spec.replicas}}: 2 (BSuccessful (Bmessage:replicationcontroller/busybox0 scaled replicationcontroller/busybox1 scaled error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}' has:Object 'Kind' is missing generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (Bgeneric-resources.sh:356: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (BSuccessful (Bmessage:Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. replicationcontroller "busybox0" force deleted replicationcontroller "busybox1" force deleted error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}' has:Object 'Kind' is missing generic-resources.sh:361: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bdeployment.apps/nginx1-deployment created I0318 12:53:57.316249 23056 event.go:307] "Event occurred" object="namespace-1679144031-18655/nginx1-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx1-deployment-69c599568 to 2" deployment.apps/nginx0-deployment created I0318 12:53:57.360528 23056 event.go:307] "Event occurred" object="namespace-1679144031-18655/nginx1-deployment-69c599568" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-69c599568-9zr29" error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false I0318 12:53:57.378584 23056 event.go:307] "Event occurred" object="namespace-1679144031-18655/nginx0-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx0-deployment-5944978c6f to 2" I0318 12:53:57.378606 23056 event.go:307] "Event occurred" object="namespace-1679144031-18655/nginx1-deployment-69c599568" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-69c599568-ddvzf" I0318 12:53:57.396389 23056 event.go:307] "Event occurred" object="namespace-1679144031-18655/nginx0-deployment-5944978c6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-5944978c6f-f4wrg" I0318 12:53:57.413294 23056 event.go:307] "Event occurred" 
object="namespace-1679144031-18655/nginx0-deployment-5944978c6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-5944978c6f-vqqvv" generic-resources.sh:365: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment: (Bgeneric-resources.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:registry.k8s.io/nginx:1.7.9: (Bgeneric-resources.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:registry.k8s.io/nginx:1.7.9: (BSuccessful (Bmessage:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1) deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1) error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}' has:Object 'Kind' is missing deployment.apps/nginx1-deployment paused deployment.apps/nginx0-deployment paused generic-resources.sh:378: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true: (BSuccessful (Bmessage:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}' has:Object 'Kind' is missing deployment.apps/nginx1-deployment resumed deployment.apps/nginx0-deployment resumed generic-resources.sh:384: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: :: (BSuccessful (Bmessage:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}' has:Object 'Kind' is missing W0318 12:53:58.316258 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:53:58.316297 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource Successful (Bmessage:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available... 
timed out waiting for the condition unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}' has:Waiting for deployment "nginx1-deployment" rollout to finish Successful (Bmessage:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available... timed out waiting for the condition unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}' has:Object 'Kind' is missing W0318 12:53:59.186194 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:53:59.186236 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource W0318 12:53:59.215296 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:53:59.215337 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource Successful (Bmessage:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available... Waiting for deployment "nginx0-deployment" rollout to finish: 0 of 2 updated replicas are available... timed out waiting for the condition timed out waiting for the condition unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}' has:Waiting for deployment "nginx0-deployment" rollout to finish Successful (Bmessage:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available... Waiting for deployment "nginx0-deployment" rollout to finish: 0 of 2 updated replicas are available... 
timed out waiting for the condition timed out waiting for the condition unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}' has:Waiting for deployment "nginx1-deployment" rollout to finish Successful (Bmessage:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available... Waiting for deployment "nginx0-deployment" rollout to finish: 0 of 2 updated replicas are available... timed out waiting for the condition timed out waiting for the condition unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}' has:Object 'Kind' is missing Successful (Bmessage:deployment.apps/nginx1-deployment REVISION CHANGE-CAUSE 1 deployment.apps/nginx0-deployment REVISION CHANGE-CAUSE 1 error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}' has:nginx0-deployment Successful (Bmessage:deployment.apps/nginx1-deployment REVISION CHANGE-CAUSE 1 deployment.apps/nginx0-deployment REVISION CHANGE-CAUSE 1 error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}' has:nginx1-deployment Successful (Bmessage:deployment.apps/nginx1-deployment REVISION CHANGE-CAUSE 1 deployment.apps/nginx0-deployment REVISION CHANGE-CAUSE 1 error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}' has:Object 'Kind' is missing Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
deployment.apps "nginx1-deployment" force deleted deployment.apps "nginx0-deployment" force deleted error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}' W0318 12:54:02.116162 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:54:02.116200 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource generic-resources.sh:411: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (Breplicationcontroller/busybox0 created I0318 12:54:02.664941 23056 event.go:307] "Event occurred" object="namespace-1679144031-18655/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-csdpb" replicationcontroller/busybox1 created error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false I0318 12:54:02.724005 23056 event.go:307] "Event occurred" object="namespace-1679144031-18655/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-pb5fq" generic-resources.sh:415: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (BSuccessful (Bmessage:no rollbacker has been implemented for "ReplicationController" no rollbacker has been implemented for "ReplicationController" unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}' has:no rollbacker has been implemented for "ReplicationController" Successful (Bmessage:no rollbacker has been implemented for "ReplicationController" no rollbacker has been implemented for "ReplicationController" unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}' has:Object 'Kind' is missing Successful (Bmessage:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in 
'{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}' error: replicationcontrollers "busybox0" pausing is not supported error: replicationcontrollers "busybox1" pausing is not supported has:Object 'Kind' is missing Successful (Bmessage:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}' error: replicationcontrollers "busybox0" pausing is not supported error: replicationcontrollers "busybox1" pausing is not supported has:replicationcontrollers "busybox0" pausing is not supported Successful (Bmessage:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}' error: replicationcontrollers "busybox0" pausing is not supported error: replicationcontrollers "busybox1" pausing is not supported has:replicationcontrollers "busybox1" pausing is not supported Successful (Bmessage:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}' error: replicationcontrollers "busybox0" resuming is not supported error: replicationcontrollers "busybox1" resuming is not supported has:Object 'Kind' is missing Successful (Bmessage:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}' error: replicationcontrollers "busybox0" resuming is not supported error: replicationcontrollers "busybox1" resuming is not supported has:replicationcontrollers "busybox0" resuming is not supported Successful (Bmessage:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in 
'{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}' error: replicationcontrollers "busybox0" resuming is not supported error: replicationcontrollers "busybox1" resuming is not supported has:replicationcontrollers "busybox1" resuming is not supported Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. replicationcontroller "busybox0" force deleted replicationcontroller "busybox1" force deleted error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}' +++ exit code: 0 Recording: run_namespace_tests Running command: run_namespace_tests +++ Running case: test-cmd.run_namespace_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_namespace_tests +++ [0318 12:54:04] Testing kubectl(v1:namespaces) Successful (Bmessage:Error from server (NotFound): namespaces "my-namespace" not found has: not found namespace/my-namespace created (dry run) namespace/my-namespace created (server dry run) Successful (Bmessage:Error from server (NotFound): namespaces "my-namespace" not found has: not found namespace/my-namespace created core.sh:1504: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace (Bquery for namespaces had limit param query for resourcequotas had limit param query for limitranges had limit param query for namespaces had user-specified limit param Successful describe namespaces verbose logs: I0318 12:54:04.577602 41949 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:54:04.582444 41949 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:54:04.590333 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces?limit=500 200 OK in 3 milliseconds I0318 12:54:04.601429 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default 200 OK in 1 milliseconds I0318 12:54:04.603144 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.604601 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.606251 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-node-lease 200 OK in 1 milliseconds I0318 12:54:04.607629 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-node-lease/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.608924 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-node-lease/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.610475 41949 round_trippers.go:553] GET 
https://127.0.0.1:6443/api/v1/namespaces/kube-public 200 OK in 1 milliseconds I0318 12:54:04.611680 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-public/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.612750 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-public/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.614160 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-system 200 OK in 1 milliseconds I0318 12:54:04.615320 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-system/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.616455 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-system/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.617989 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/my-namespace 200 OK in 1 milliseconds I0318 12:54:04.619196 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/my-namespace/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.620362 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/my-namespace/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.621979 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143811-20240 200 OK in 1 milliseconds I0318 12:54:04.623186 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143811-20240/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.624382 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143811-20240/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.625853 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143811-30125 200 OK in 1 milliseconds I0318 12:54:04.626933 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143811-30125/resourcequotas?limit=500 200 OK in 0 milliseconds I0318 12:54:04.628146 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143811-30125/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.629834 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143812-6002 200 OK in 1 milliseconds I0318 12:54:04.630975 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143812-6002/resourcequotas?limit=500 200 OK in 0 milliseconds I0318 12:54:04.632061 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143812-6002/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.633633 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143814-29476 200 OK in 1 milliseconds I0318 12:54:04.634794 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143814-29476/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.635768 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143814-29476/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.637262 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143817-32530 200 OK in 1 milliseconds I0318 12:54:04.638495 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143817-32530/resourcequotas?limit=500 200 OK in 1 
milliseconds I0318 12:54:04.639594 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143817-32530/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.640973 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143823-31568 200 OK in 1 milliseconds I0318 12:54:04.642318 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143823-31568/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.643436 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143823-31568/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.644894 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143826-1679 200 OK in 1 milliseconds I0318 12:54:04.646066 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143826-1679/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.647216 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143826-1679/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.648692 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143829-13516 200 OK in 1 milliseconds I0318 12:54:04.649857 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143829-13516/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.650977 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143829-13516/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.652708 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143834-32323 200 OK in 1 milliseconds I0318 12:54:04.654045 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143834-32323/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.655260 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143834-32323/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.656986 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143837-15778 200 OK in 1 milliseconds I0318 12:54:04.658194 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143837-15778/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.659392 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143837-15778/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.660960 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143838-25271 200 OK in 1 milliseconds I0318 12:54:04.662141 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143838-25271/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.663223 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143838-25271/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.664656 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143839-21860 200 OK in 1 milliseconds I0318 12:54:04.665871 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143839-21860/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.666902 41949 round_trippers.go:553] GET 
https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143839-21860/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.668442 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143849-30637 200 OK in 1 milliseconds I0318 12:54:04.669542 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143849-30637/resourcequotas?limit=500 200 OK in 0 milliseconds I0318 12:54:04.670648 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143849-30637/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.672103 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143850-3110 200 OK in 1 milliseconds I0318 12:54:04.673251 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143850-3110/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.674350 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143850-3110/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.675844 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143862-28451 200 OK in 1 milliseconds I0318 12:54:04.677097 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143862-28451/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.678310 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143862-28451/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.679796 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143863-8287 200 OK in 1 milliseconds I0318 12:54:04.680908 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143863-8287/resourcequotas?limit=500 200 OK in 0 milliseconds I0318 12:54:04.682007 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143863-8287/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.683959 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143865-5175 200 OK in 1 milliseconds I0318 12:54:04.685249 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143865-5175/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.686369 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143865-5175/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.687820 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143866-13326 200 OK in 1 milliseconds I0318 12:54:04.689001 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143866-13326/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.690071 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143866-13326/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.691505 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143866-14796 200 OK in 1 milliseconds I0318 12:54:04.692819 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143866-14796/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.693962 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143866-14796/limitranges?limit=500 200 OK in 1 milliseconds I0318 
12:54:04.695476 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143869-10982 200 OK in 1 milliseconds I0318 12:54:04.696614 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143869-10982/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.697810 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143869-10982/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.699331 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143869-2145 200 OK in 1 milliseconds I0318 12:54:04.700462 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143869-2145/resourcequotas?limit=500 200 OK in 0 milliseconds I0318 12:54:04.701714 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143869-2145/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.703259 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143919-30781 200 OK in 1 milliseconds I0318 12:54:04.704506 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143919-30781/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.705569 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143919-30781/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.706963 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143927-2947 200 OK in 1 milliseconds I0318 12:54:04.708281 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143927-2947/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.709400 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143927-2947/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.710990 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143928-31748 200 OK in 1 milliseconds I0318 12:54:04.712187 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143928-31748/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.713356 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143928-31748/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.714725 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143929-14049 200 OK in 1 milliseconds I0318 12:54:04.715798 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143929-14049/resourcequotas?limit=500 200 OK in 0 milliseconds I0318 12:54:04.716901 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143929-14049/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.718364 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143944-21585 200 OK in 1 milliseconds I0318 12:54:04.719461 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143944-21585/resourcequotas?limit=500 200 OK in 0 milliseconds I0318 12:54:04.720561 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143944-21585/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.722038 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143964-22608 200 OK 
in 1 milliseconds I0318 12:54:04.723219 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143964-22608/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.724317 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143964-22608/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.725722 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143964-25511 200 OK in 1 milliseconds I0318 12:54:04.726844 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143964-25511/resourcequotas?limit=500 200 OK in 0 milliseconds I0318 12:54:04.727995 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143964-25511/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.729501 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143973-26352 200 OK in 1 milliseconds I0318 12:54:04.730590 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143973-26352/resourcequotas?limit=500 200 OK in 0 milliseconds I0318 12:54:04.731695 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143973-26352/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.733100 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143980-188 200 OK in 1 milliseconds I0318 12:54:04.734191 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143980-188/resourcequotas?limit=500 200 OK in 0 milliseconds I0318 12:54:04.735333 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143980-188/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.736946 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143987-30554 200 OK in 1 milliseconds I0318 12:54:04.738107 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143987-30554/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.739266 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143987-30554/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.740681 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143988-27947 200 OK in 1 milliseconds I0318 12:54:04.741799 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143988-27947/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.742840 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143988-27947/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.744228 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143991-22601 200 OK in 1 milliseconds I0318 12:54:04.745381 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143991-22601/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.746470 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143991-22601/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.747924 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143994-6980 200 OK in 1 milliseconds I0318 12:54:04.749048 41949 round_trippers.go:553] GET 
https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143994-6980/resourcequotas?limit=500 200 OK in 0 milliseconds I0318 12:54:04.750145 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143994-6980/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.751639 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143994-7385 200 OK in 1 milliseconds I0318 12:54:04.752764 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143994-7385/resourcequotas?limit=500 200 OK in 0 milliseconds I0318 12:54:04.753923 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143994-7385/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.755229 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143995-4864 200 OK in 0 milliseconds I0318 12:54:04.756366 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143995-4864/resourcequotas?limit=500 200 OK in 0 milliseconds I0318 12:54:04.757530 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143995-4864/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.758926 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143997-5909 200 OK in 1 milliseconds I0318 12:54:04.760064 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143997-5909/resourcequotas?limit=500 200 OK in 0 milliseconds I0318 12:54:04.761197 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143997-5909/limitranges?limit=500 200 OK in 0 milliseconds I0318 12:54:04.762654 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143998-15319 200 OK in 1 milliseconds I0318 12:54:04.763703 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143998-15319/resourcequotas?limit=500 200 OK in 0 milliseconds I0318 12:54:04.764826 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679143998-15319/limitranges?limit=500 200 OK in 1 milliseconds I0318 12:54:04.766221 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144031-18655 200 OK in 1 milliseconds I0318 12:54:04.767486 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144031-18655/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:04.768553 41949 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144031-18655/limitranges?limit=500 200 OK in 0 milliseconds
namespace "my-namespace" deleted
W0318 12:54:06.969958 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0318 12:54:06.970007 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0318 12:54:08.220447 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0318 12:54:08.220488 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0318 12:54:09.346575 23056 shared_informer.go:311] Waiting for caches to sync for resource quota
I0318 12:54:09.346626 23056 shared_informer.go:318] Caches are synced for resource quota
I0318 12:54:09.565870 23056 shared_informer.go:311] Waiting for caches to sync for garbage collector
I0318 12:54:09.565917 23056 shared_informer.go:318] Caches are synced for garbage collector
W0318 12:54:09.921653 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0318 12:54:09.921704 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1515: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0318 12:54:10.505083 23056 horizontal.go:512] "Horizontal Pod Autoscaler has been deleted" HPA="namespace-1679144031-18655/busybox0"
I0318 12:54:10.532232 23056 horizontal.go:512] "Horizontal Pod Autoscaler has been deleted" HPA="namespace-1679144031-18655/busybox1"
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1679143811-20240" deleted
namespace "namespace-1679143811-30125" deleted
namespace "namespace-1679143812-6002" deleted
namespace "namespace-1679143814-29476" deleted
namespace "namespace-1679143817-32530" deleted
namespace "namespace-1679143823-31568" deleted
namespace "namespace-1679143826-1679" deleted
namespace "namespace-1679143829-13516" deleted
namespace "namespace-1679143834-32323" deleted
namespace "namespace-1679143837-15778" deleted
namespace "namespace-1679143838-25271" deleted
namespace "namespace-1679143839-21860" deleted
namespace "namespace-1679143849-30637" deleted
namespace "namespace-1679143850-3110" deleted
namespace "namespace-1679143862-28451" deleted
namespace "namespace-1679143863-8287" deleted
namespace "namespace-1679143865-5175" deleted
namespace "namespace-1679143866-13326" deleted
namespace "namespace-1679143866-14796" deleted
namespace "namespace-1679143869-10982" deleted
namespace "namespace-1679143869-2145" deleted
namespace "namespace-1679143919-30781" deleted
namespace "namespace-1679143927-2947" deleted
namespace "namespace-1679143928-31748" deleted
namespace "namespace-1679143929-14049" deleted
namespace "namespace-1679143944-21585" deleted
namespace "namespace-1679143964-22608" deleted
namespace "namespace-1679143964-25511" deleted
namespace "namespace-1679143973-26352" deleted
namespace "namespace-1679143980-188" deleted
namespace "namespace-1679143987-30554" deleted
namespace "namespace-1679143988-27947" deleted
namespace "namespace-1679143991-22601" deleted
namespace "namespace-1679143994-6980" deleted
namespace "namespace-1679143994-7385" deleted
namespace "namespace-1679143995-4864" deleted
namespace "namespace-1679143997-5909" deleted
namespace "namespace-1679143998-15319" deleted
namespace "namespace-1679144031-18655" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:Warning: deleting cluster-scoped resources
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1679143811-20240" deleted
namespace "namespace-1679143811-30125" deleted
namespace "namespace-1679143812-6002" deleted
namespace "namespace-1679143814-29476" deleted
namespace "namespace-1679143817-32530" deleted
namespace "namespace-1679143823-31568" deleted
namespace "namespace-1679143826-1679" deleted
namespace "namespace-1679143829-13516" deleted
namespace "namespace-1679143834-32323" deleted
namespace "namespace-1679143837-15778" deleted
namespace "namespace-1679143838-25271" deleted
namespace "namespace-1679143839-21860" deleted
namespace "namespace-1679143849-30637" deleted
namespace "namespace-1679143850-3110" deleted
namespace "namespace-1679143862-28451" deleted
namespace "namespace-1679143863-8287" deleted
namespace "namespace-1679143865-5175" deleted
namespace "namespace-1679143866-13326" deleted
namespace "namespace-1679143866-14796" deleted
namespace "namespace-1679143869-10982" deleted
namespace "namespace-1679143869-2145" deleted
namespace "namespace-1679143919-30781" deleted
namespace "namespace-1679143927-2947" deleted
namespace "namespace-1679143928-31748" deleted
namespace "namespace-1679143929-14049" deleted
namespace "namespace-1679143944-21585" deleted
namespace "namespace-1679143964-22608" deleted
namespace "namespace-1679143964-25511" deleted
namespace "namespace-1679143973-26352" deleted
namespace "namespace-1679143980-188" deleted
namespace "namespace-1679143987-30554" deleted
namespace "namespace-1679143988-27947" deleted
namespace "namespace-1679143991-22601" deleted
namespace "namespace-1679143994-6980" deleted
namespace "namespace-1679143994-7385" deleted
namespace "namespace-1679143995-4864" deleted
namespace "namespace-1679143997-5909" deleted
namespace "namespace-1679143998-15319" deleted
namespace "namespace-1679144031-18655" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
namespace/quotas created
core.sh:1522: Successful get namespaces/quotas {{.metadata.name}}: quotas
core.sh:1523: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name "test-quota" }}found{{end}}{{end}}:: :
resourcequota/test-quota created (dry run)
resourcequota/test-quota created (server dry run)
core.sh:1527: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name "test-quota" }}found{{end}}{{end}}:: :
resourcequota/test-quota created
core.sh:1530: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name "test-quota" }}found{{end}}{{end}}:: found:
query for resourcequotas had limit param
query for resourcequotas had user-specified limit param
Successful describe resourcequotas verbose logs:
I0318 12:54:11.521281 42150 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:54:11.526744 42150 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds I0318 12:54:11.533101 42150 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/quotas/resourcequotas?limit=500 200 OK in 1 milliseconds I0318 12:54:11.535252 42150 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/quotas/resourcequotas/test-quota 200 OK in 1 milliseconds
resourcequota "test-quota" deleted
I0318 12:54:11.673900 23056 resource_quota_controller.go:337] "Resource quota has been deleted" key="quotas/test-quota"
namespace "quotas" deleted
W0318 12:54:12.521951 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0318 12:54:12.521994 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1544: Successful get namespaces {{range.items}}{{ if eq .metadata.name "other" }}found{{end}}{{end}}:: :
namespace/other created
core.sh:1548: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1552: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
core.sh:1556: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1558: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
core.sh:1565: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
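The namespace and quota phase above is driven by ordinary kubectl namespace/quota commands. A sketch of their shape; the --hard values are placeholders, and -v=6 is the verbosity level that surfaces the round_trippers GET traces checked by the "had limit param" assertions:

  kubectl create namespace my-namespace --dry-run=client
  kubectl create namespace my-namespace --dry-run=server
  kubectl create namespace my-namespace
  kubectl describe namespaces -v=6
  kubectl create quota test-quota --namespace=quotas --hard=cpu=1,memory=1G
  kubectl describe resourcequotas --namespace=quotas -v=6
  kubectl delete quota test-quota --namespace=quotas

Note that default, kube-public, and kube-system are protected by an admission plugin, which is why the bulk delete above reports Forbidden for exactly those three.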
pod "valid-pod" force deleted core.sh:1569: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: (Bnamespace "other" deleted I0318 12:54:20.333858 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="my-namespace" I0318 12:54:20.749443 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="kube-node-lease" I0318 12:54:20.896722 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143811-20240" I0318 12:54:20.970126 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143811-30125" I0318 12:54:20.984933 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143826-1679" I0318 12:54:21.000901 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143817-32530" I0318 12:54:21.016138 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143814-29476" I0318 12:54:21.045744 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143812-6002" I0318 12:54:21.106328 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143823-31568" I0318 12:54:21.117917 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143829-13516" I0318 12:54:21.155294 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143834-32323" I0318 12:54:21.288123 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143837-15778" I0318 12:54:21.370952 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143838-25271" I0318 12:54:21.447591 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143839-21860" I0318 12:54:21.447645 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143849-30637" I0318 12:54:21.461899 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143862-28451" I0318 12:54:21.488608 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143863-8287" I0318 12:54:21.495799 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143850-3110" I0318 12:54:21.511031 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143865-5175" I0318 12:54:21.530247 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143866-13326" I0318 12:54:21.705602 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143866-14796" I0318 12:54:21.845885 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143869-10982" I0318 12:54:22.128006 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143927-2947" I0318 12:54:22.138613 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143919-30781" I0318 12:54:22.138643 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143944-21585" I0318 12:54:22.175195 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143964-22608" I0318 12:54:22.184392 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143928-31748" I0318 12:54:22.243047 23056 namespace_controller.go:182] 
"Namespace has been deleted" namespace="namespace-1679143964-25511" I0318 12:54:22.256609 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143869-2145" I0318 12:54:22.353072 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143973-26352" I0318 12:54:22.377429 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143929-14049" I0318 12:54:22.473452 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143980-188" I0318 12:54:22.770186 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143987-30554" I0318 12:54:22.828248 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143994-6980" I0318 12:54:22.872648 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143994-7385" I0318 12:54:22.872727 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143988-27947" I0318 12:54:22.902270 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143997-5909" I0318 12:54:22.963483 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143995-4864" I0318 12:54:23.010929 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143998-15319" I0318 12:54:23.049474 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679143991-22601" I0318 12:54:23.061000 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="quotas" I0318 12:54:23.127317 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="namespace-1679144031-18655" +++ exit code: 0 Recording: run_secrets_test Running command: run_secrets_test +++ Running case: test-cmd.run_secrets_test +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_secrets_test +++ [0318 12:54:24] Creating namespace namespace-1679144064-29468 namespace/namespace-1679144064-29468 created W0318 12:54:24.355324 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:54:24.355366 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource Context "test" modified. 
+++ [0318 12:54:24] Testing secrets
I0318 12:54:24.448807 42448 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config
Successful
message:apiVersion: v1
data:
  key1: dmFsdWUx
kind: Secret
metadata:
  creationTimestamp: null
  name: test
has:kind: Secret
Successful
message:apiVersion: v1
data:
  key1: dmFsdWUx
kind: Secret
metadata:
  creationTimestamp: null
  name: test
has:apiVersion: v1
Successful
message:apiVersion: v1
data:
  key1: dmFsdWUx
kind: Secret
metadata:
  creationTimestamp: null
  name: test
has:key1: dmFsdWUx
Successful
message:apiVersion: v1
data:
  key1: dmFsdWUx
kind: Secret
metadata:
  creationTimestamp: null
  name: test
has not:example.com
core.sh:831: Successful get namespaces {{range.items}}{{ if eq .metadata.name "test-secrets" }}found{{end}}{{end}}:: :
namespace/test-secrets created
core.sh:835: Successful get namespaces/test-secrets {{.metadata.name}}: test-secrets
core.sh:839: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}:
secret/test-secret created
core.sh:843: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:844: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
query for secrets had limit param
query for secrets had user-specified limit param
Successful describe secrets verbose logs:
I0318 12:54:25.044333 42574 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:54:25.049014 42574 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:54:25.054109 42574 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-secrets/secrets?limit=500 200 OK in 1 milliseconds I0318 12:54:25.056948 42574 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-secrets/secrets/test-secret 200 OK in 1 milliseconds
secret "test-secret" deleted
core.sh:856: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}:
secret/test-secret created
core.sh:860: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:861: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
secret "test-secret" deleted
core.sh:871: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}:
secret/test-secret created
core.sh:875: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:876: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
secret "test-secret" deleted
core.sh:886: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}:
secret/test-secret created
core.sh:889: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:890: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
secret "test-secret" deleted
secret/test-secret created
core.sh:896: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:897: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
secret "test-secret" deleted
secret/secret-string-data created
core.sh:919: Successful get secret/secret-string-data --namespace=test-secrets {{.data}}: map[k1:djE= k2:djI=]
core.sh:920: Successful get secret/secret-string-data --namespace=test-secrets {{.data}}: map[k1:djE= k2:djI=]
core.sh:921: Successful get secret/secret-string-data --namespace=test-secrets {{.stringData}}:
secret "secret-string-data" deleted
core.sh:930: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}:
W0318 12:54:27.251670 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0318 12:54:27.251705 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
namespace "test-secrets" deleted
W0318 12:54:28.129604 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0318 12:54:28.129642 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0318 12:54:29.184748 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="other"
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests
+++ Running case: test-cmd.run_configmap_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_configmap_tests
+++ [0318 12:54:32] Creating namespace namespace-1679144072-4326
namespace/namespace-1679144072-4326 created
Context "test" modified.
+++ [0318 12:54:32] Testing configmaps
configmap/test-configmap created
core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
configmap "test-configmap" deleted
core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name "test-configmaps" }}found{{end}}{{end}}:: :
namespace/test-configmaps created
core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name "test-configmap" }}found{{end}}{{end}}:: :
core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name "test-binary-configmap" }}found{{end}}{{end}}:: :
configmap/test-configmap created (dry run)
configmap/test-configmap created (server dry run)
core.sh:46: Successful get configmaps {{range.items}}{{ if eq .metadata.name "test-configmap" }}found{{end}}{{end}}:: :
configmap/test-configmap created
configmap/test-binary-configmap created
core.sh:51: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
core.sh:52: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
query for configmaps had limit param
query for events had limit param
query for configmaps had user-specified limit param
Successful describe configmaps verbose logs:
I0318 12:54:34.061805 43317 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:54:34.067902 43317 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds I0318 12:54:34.074345 43317 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/configmaps?limit=500 200 OK in 1 milliseconds I0318 12:54:34.076586 43317 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/configmaps/kube-root-ca.crt 200 OK in 1 milliseconds I0318 12:54:34.078044 43317 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/events?fieldSelector=involvedObject.name%3Dkube-root-ca.crt%2CinvolvedObject.namespace%3Dtest-configmaps%2CinvolvedObject.kind%3DConfigMap%2CinvolvedObject.uid%3Dd2055ed5-f84e-43e0-99a8-30f256f8b70c&limit=500 200 OK in 1 milliseconds I0318 12:54:34.079610 43317 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/configmaps/test-binary-configmap 200 OK in 1 milliseconds I0318 12:54:34.080983 43317 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/events?fieldSelector=involvedObject.namespace%3Dtest-configmaps%2CinvolvedObject.kind%3DConfigMap%2CinvolvedObject.uid%3D331478e7-14da-44f5-81c6-4e03dcf8da36%2CinvolvedObject.name%3Dtest-binary-configmap&limit=500 200 OK in 1 milliseconds I0318 12:54:34.082391 43317 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/configmaps/test-configmap 200 OK in 1 milliseconds I0318 12:54:34.083734 43317 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/events?fieldSelector=involvedObject.kind%3DConfigMap%2CinvolvedObject.uid%3D5209ddb7-519f-4deb-ad55-9fe187f1271b%2CinvolvedObject.name%3Dtest-configmap%2CinvolvedObject.namespace%3Dtest-configmaps&limit=500 200 OK in 1 milliseconds
configmap "test-configmap" deleted
configmap "test-binary-configmap" deleted
namespace "test-configmaps" deleted
W0318 12:54:35.741625 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0318 12:54:35.741665 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0318 12:54:37.557684 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="test-secrets"
+++ exit code: 0
Recording: run_client_config_tests
Running command: run_client_config_tests
+++ Running case: test-cmd.run_client_config_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_client_config_tests
+++ [0318 12:54:39] Creating namespace namespace-1679144079-7668
namespace/namespace-1679144079-7668 created
Context "test" modified.
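The client-config checks that come next probe kubectl's kubeconfig error paths with deliberately bad global flags; roughly, each error message in the block maps to one flag, as in this sketch:

  kubectl get pods --kubeconfig=missing          # stat missing: no such file or directory
  kubectl get pods --context=missing-context     # context was not found
  kubectl get pods --cluster=missing-cluster     # no server found for cluster
  kubectl get pods --user=missing-user           # auth info does not exist

The remaining case feeds a config file declaring an unregistered apiVersion ("v-1") to trigger the "error loading config file" path; the exact file contents are in the test scripts.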
+++ [0318 12:54:39] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "vendor/k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests
+++ Running case: test-cmd.run_service_accounts_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_service_accounts_tests
+++ [0318 12:54:40] Creating namespace namespace-1679144080-10414
namespace/namespace-1679144080-10414 created
Context "test" modified.
+++ [0318 12:54:40] Testing service accounts
core.sh:951: Successful get namespaces {{range.items}}{{ if eq .metadata.name "test-service-accounts" }}found{{end}}{{end}}:: :
namespace/test-service-accounts created
core.sh:955: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
core.sh:959: Successful get serviceaccount --namespace=test-service-accounts {{range.items}}{{ if eq .metadata.name "test-service-account" }}found{{end}}{{end}}:: :
serviceaccount/test-service-account created (dry run)
serviceaccount/test-service-account created (server dry run)
core.sh:963: Successful get serviceaccount --namespace=test-service-accounts {{range.items}}{{ if eq .metadata.name "test-service-account" }}found{{end}}{{end}}:: :
serviceaccount/test-service-account created
core.sh:967: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
query for serviceaccounts had limit param
query for secrets had limit param
query for events had limit param
query for serviceaccounts had user-specified limit param
Successful describe serviceaccounts verbose logs:
I0318 12:54:40.944864 43762 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:54:40.949448 43762 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:54:40.954848 43762 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/serviceaccounts?limit=500 200 OK in 1 milliseconds I0318 12:54:40.958061 43762 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/serviceaccounts/default 200 OK in 1 milliseconds I0318 12:54:40.959618 43762 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/secrets?limit=500 200 OK in 1 milliseconds I0318 12:54:40.961269 43762 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/events?fieldSelector=involvedObject.name%3Ddefault%2CinvolvedObject.namespace%3Dtest-service-accounts%2CinvolvedObject.kind%3DServiceAccount%2CinvolvedObject.uid%3Dcd260628-9359-4232-9f16-a152f0e3e7e2&limit=500 200 OK in 1 milliseconds I0318 12:54:40.962993 43762 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/serviceaccounts/test-service-account 200 OK in 1 milliseconds I0318 12:54:40.964185 43762 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/secrets?limit=500 200 OK in 1 milliseconds I0318 12:54:40.965408 43762 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/events?fieldSelector=involvedObject.name%3Dtest-service-account%2CinvolvedObject.namespace%3Dtest-service-accounts%2CinvolvedObject.kind%3DServiceAccount%2CinvolvedObject.uid%3D4183a333-7f45-42c5-9d9d-c38e35e18f87&limit=500 200 OK in 1 milliseconds
serviceaccount "test-service-account" deleted
namespace "test-service-accounts" deleted
I0318 12:54:44.488399 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="test-configmaps"
+++ exit code: 0
Recording: run_job_tests
Running command: run_job_tests
+++ Running case: test-cmd.run_job_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_job_tests
+++ [0318 12:54:46] Creating namespace namespace-1679144086-22328
namespace/namespace-1679144086-22328 created
Context "test" modified.
+++ [0318 12:54:46] Testing job
batch.sh:30: Successful get namespaces {{range.items}}{{ if eq .metadata.name "test-jobs" }}found{{end}}{{end}}:: :
namespace/test-jobs created
batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
batch.sh:37: Successful get cronjob --namespace=test-jobs {{range.items}}{{ if eq .metadata.name "pi" }}found{{end}}{{end}}:: :
cronjob.batch/pi created (dry run)
cronjob.batch/pi created (server dry run)
batch.sh:41: Successful get cronjob {{range.items}}{{ if eq .metadata.name "pi" }}found{{end}}{{end}}:: :
I0318 12:54:46.990265 23056 event.go:307] "Event occurred" object="test-jobs/pi" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Warning" reason="InvalidSchedule" message="invalid schedule: 59 23 31 2 * : time difference between two schedules is less than 1 second"
cronjob.batch/pi created
batch.sh:45: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
NAME   SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
pi     59 23 31 2 *  False     0                        1s
Name: pi
Namespace: test-jobs
Labels:
Annotations:
Schedule: 59 23 31 2 *
Concurrency Policy: Allow
Suspend: False
Successful Job History Limit: 3
Failed Job History Limit: 1
Starting Deadline Seconds:
Selector:
Parallelism:
Completions:
Pod Template:
  Labels:
  Containers:
   pi:
    Image: registry.k8s.io/perl
    Port:
    Host Port:
    Command: perl -Mbignum=bpi -wle print bpi(20) -s https://127.0.0.1:6443 --insecure-skip-tls-verify --match-server-version
    Environment:
    Mounts:
  Volumes:
Last Schedule Time:
Active Jobs:
Events:
  Type     Reason           Age   From                Message
  ----     ------           ----  ----                -------
  Warning  InvalidSchedule  1s    cronjob-controller  invalid schedule: 59 23 31 2 * : time difference between two schedules is less than 1 second
query for cronjobs had limit param
query for events had limit param
query for cronjobs had user-specified limit param
Successful describe cronjobs verbose logs:
I0318 12:54:47.223098 44032 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:54:47.227944 44032 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:54:47.233752 44032 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/test-jobs/cronjobs?limit=500 200 OK in 2 milliseconds I0318 12:54:47.236282 44032 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/test-jobs/cronjobs/pi 200 OK in 1 milliseconds I0318 12:54:47.239611 44032 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-jobs/events?fieldSelector=involvedObject.name%3Dpi%2CinvolvedObject.namespace%3Dtest-jobs%2CinvolvedObject.kind%3DCronJob%2CinvolvedObject.uid%3Dcbfda2ed-9aeb-4fd8-9a8e-978fbc31c8ec&limit=500 200 OK in 1 milliseconds
W0318 12:54:47.355654 44058 helpers.go:706] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
Successful
message:job.batch/test-job
has:job.batch/test-job
batch.sh:56: Successful get jobs {{range.items}}{{.metadata.name}}{{end}}:
batch.sh:59: Successful get job --namespace=test-jobs {{range.items}}{{ if eq .metadata.name "test-jobs" }}found{{end}}{{end}}:: :
job.batch/test-job created (dry run)
I0318 12:54:47.618620 19996 controller.go:624] quota admission added evaluator for: jobs.batch
job.batch/test-job created (server dry run)
batch.sh:63: Successful get job --namespace=test-jobs {{range.items}}{{ if eq .metadata.name "test-jobs" }}found{{end}}{{end}}:: :
job.batch/test-job created
I0318 12:54:47.766283 23056 job_controller.go:523] enqueueing job test-jobs/test-job
I0318 12:54:47.781079 23056 job_controller.go:523] enqueueing job test-jobs/test-job
I0318 12:54:47.781078 23056 event.go:307] "Event occurred" object="test-jobs/test-job" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-job-rfjrj"
I0318 12:54:47.799564 23056 job_controller.go:523] enqueueing job test-jobs/test-job
batch.sh:67: Successful get job/test-job --namespace=test-jobs {{.metadata.name}}: test-job
NAME       COMPLETIONS   DURATION   AGE
test-job   0/1           0s         0s
Name: test-job
Namespace: test-jobs
Selector: batch.kubernetes.io/controller-uid=a2a8b6bf-b0d4-4a81-af41-172bfdc1e23e
Labels: batch.kubernetes.io/controller-uid=a2a8b6bf-b0d4-4a81-af41-172bfdc1e23e
        batch.kubernetes.io/job-name=test-job
        controller-uid=a2a8b6bf-b0d4-4a81-af41-172bfdc1e23e
        job-name=test-job
Annotations: batch.kubernetes.io/job-tracking:
             cronjob.kubernetes.io/instantiate: manual
Parallelism: 1
Completions: 1
Completion Mode: NonIndexed
Start Time: Sat, 18 Mar 2023 12:54:47 +0000
Pods Statuses: 1 Active (0 Ready) / 0 Succeeded / 0 Failed
Pod Template:
  Labels: batch.kubernetes.io/controller-uid=a2a8b6bf-b0d4-4a81-af41-172bfdc1e23e
          batch.kubernetes.io/job-name=test-job
          controller-uid=a2a8b6bf-b0d4-4a81-af41-172bfdc1e23e
          job-name=test-job
  Containers:
   pi:
    Image: registry.k8s.io/perl
    Port:
    Host Port:
    Command: perl -Mbignum=bpi -wle print bpi(20) -s https://127.0.0.1:6443 --insecure-skip-tls-verify --match-server-version
    Environment:
    Mounts:
  Volumes:
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-rfjrj
query for jobs had limit param
query for events had limit param
query for jobs had user-specified limit param
Successful describe jobs verbose logs:
I0318 12:54:48.019975 44184 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:54:48.024842 44184 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200
OK in 4 milliseconds I0318 12:54:48.030067 44184 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/test-jobs/jobs?limit=500 200 OK in 1 milliseconds I0318 12:54:48.033460 44184 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/test-jobs/jobs/test-job 200 OK in 1 milliseconds I0318 12:54:48.036635 44184 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-jobs/events?fieldSelector=involvedObject.kind%3DJob%2CinvolvedObject.uid%3Da2a8b6bf-b0d4-4a81-af41-172bfdc1e23e%2CinvolvedObject.name%3Dtest-job%2CinvolvedObject.namespace%3Dtest-jobs&limit=500 200 OK in 1 milliseconds (BI0318 12:54:48.181774 23056 job_controller.go:523] enqueueing job test-jobs/test-job job.batch "test-job" deleted cronjob.batch "pi" deleted namespace "test-jobs" deleted I0318 12:54:51.283901 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="test-service-accounts" W0318 12:54:52.055245 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:54:52.055284 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource +++ exit code: 0 Recording: run_create_job_tests Running command: run_create_job_tests +++ Running case: test-cmd.run_create_job_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_create_job_tests +++ [0318 12:54:53] Creating namespace namespace-1679144093-32331 namespace/namespace-1679144093-32331 created Context "test" modified. I0318 12:54:53.780914 23056 job_controller.go:523] enqueueing job namespace-1679144093-32331/test-job job.batch/test-job created I0318 12:54:53.795080 23056 event.go:307] "Event occurred" object="namespace-1679144093-32331/test-job" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-job-bg8zt" I0318 12:54:53.795162 23056 job_controller.go:523] enqueueing job namespace-1679144093-32331/test-job I0318 12:54:53.831967 23056 job_controller.go:523] enqueueing job namespace-1679144093-32331/test-job create.sh:94: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: registry.k8s.io/nginx:test-cmd (Bjob.batch "test-job" deleted I0318 12:54:53.930894 23056 job_controller.go:523] enqueueing job namespace-1679144093-32331/test-job I0318 12:54:54.027309 23056 job_controller.go:523] enqueueing job namespace-1679144093-32331/test-job-pi job.batch/test-job-pi created I0318 12:54:54.038340 23056 job_controller.go:523] enqueueing job namespace-1679144093-32331/test-job-pi I0318 12:54:54.038377 23056 event.go:307] "Event occurred" object="namespace-1679144093-32331/test-job-pi" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-job-pi-kljwl" I0318 12:54:54.068645 23056 job_controller.go:523] enqueueing job namespace-1679144093-32331/test-job-pi create.sh:100: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: registry.k8s.io/perl (Bjob.batch "test-job-pi" deleted I0318 12:54:54.166075 23056 job_controller.go:523] enqueueing job namespace-1679144093-32331/test-job-pi cronjob.batch/test-pi created job.batch/my-pi created I0318 12:54:54.322880 23056 job_controller.go:523] enqueueing job 
namespace-1679144093-32331/my-pi
I0318 12:54:54.334133 23056 job_controller.go:523] enqueueing job namespace-1679144093-32331/my-pi
I0318 12:54:54.334165 23056 event.go:307] "Event occurred" object="namespace-1679144093-32331/my-pi" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: my-pi-4m82f"
I0318 12:54:54.351988 23056 job_controller.go:523] enqueueing job namespace-1679144093-32331/my-pi
Successful
message:[perl -Mbignum=bpi -wle print bpi(10)]
has:perl -Mbignum=bpi -wle print bpi(10)
job.batch "my-pi" deleted
I0318 12:54:54.456832 23056 job_controller.go:523] enqueueing job namespace-1679144093-32331/my-pi
cronjob.batch "test-pi" deleted
+++ exit code: 0
Recording: run_pod_templates_tests
Running command: run_pod_templates_tests
+++ Running case: test-cmd.run_pod_templates_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_pod_templates_tests
+++ [0318 12:54:54] Creating namespace namespace-1679144094-25199
namespace/namespace-1679144094-25199 created
Context "test" modified.
+++ [0318 12:54:54] Testing pod templates
core.sh:1631: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}:
I0318 12:54:54.996499 19996 controller.go:624] quota admission added evaluator for: podtemplates
podtemplate/nginx created
core.sh:1635: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
NAME    CONTAINERS   IMAGES   POD LABELS
nginx   nginx        nginx    name=nginx
core.sh:1643: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
query for podtemplates had limit param
query for events had limit param
query for podtemplates had user-specified limit param
Successful describe podtemplates verbose logs:
I0318 12:54:55.308469 44613 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:54:55.313127 44613 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0318 12:54:55.318711 44613 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144094-25199/podtemplates?limit=500 200 OK in 1 milliseconds
I0318 12:54:55.320947 44613 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144094-25199/podtemplates/nginx 200 OK in 1 milliseconds
I0318 12:54:55.322503 44613 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144094-25199/events?fieldSelector=involvedObject.name%3Dnginx%2CinvolvedObject.namespace%3Dnamespace-1679144094-25199%2CinvolvedObject.kind%3DPodTemplate%2CinvolvedObject.uid%3D5bc25224-03b5-4a74-84a5-3b8a618f80a7&limit=500 200 OK in 1 milliseconds
podtemplate "nginx" deleted
core.sh:1649: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}:
+++ exit code: 0
Recording: run_service_tests
Running command: run_service_tests
+++ Running case: test-cmd.run_service_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_service_tests
Context "test" modified.
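For reference, the job and cronjob cases above reduce to a few kubectl invocations. A minimal sketch, using the image, schedule, and names shown in the test output above (the test's exact wrapper flags are omitted):

# Create the cronjob the tests describe; "59 23 31 2 *" is the fixture's
# deliberately rare schedule, which triggers the InvalidSchedule warning above.
kubectl create cronjob pi --namespace=test-jobs \
  --image=registry.k8s.io/perl --schedule='59 23 31 2 *' \
  -- perl -Mbignum=bpi -wle 'print bpi(20)'

# Instantiate a one-off job from a cronjob, as run_create_job_tests does for my-pi.
kubectl create job my-pi --from=cronjob/test-pi

# The batch.sh assertions are go-template queries of this shape.
kubectl get cronjob pi --namespace=test-jobs -o go-template='{{.metadata.name}}'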
+++ [0318 12:54:55] Testing kubectl(v1:services) core.sh:989: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (BI0318 12:54:55.955383 19996 alloc.go:330] "allocated clusterIPs" service="default/redis-master" clusterIPs=map[IPv4:10.0.0.139] service/redis-master created core.sh:993: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master: (Bmatched Name: matched Labels: matched Selector: matched IP: matched Port: matched Endpoints: matched Session Affinity: core.sh:995: Successful describe services redis-master: Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.139 IPs: 10.0.0.139 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None Events: (Bcore.sh:997: Successful describe Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.139 IPs: 10.0.0.139 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None Events: (B core.sh:999: Successful describe Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.139 IPs: 10.0.0.139 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None (B core.sh:1001: Successful describe Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.139 IPs: 10.0.0.139 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None Events: (B matched Name: matched Labels: matched Selector: matched IP: matched Port: matched Endpoints: matched Session Affinity: Successful describe services: Name: kubernetes Namespace: default Labels: component=apiserver provider=kubernetes Annotations: Selector: Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.1 IPs: 10.0.0.1 Port: https 443/TCP TargetPort: 6443/TCP Endpoints: 10.33.29.5:6443 Session Affinity: None Events: Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.139 IPs: 10.0.0.139 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None Events: (BSuccessful describe Name: kubernetes Namespace: default Labels: component=apiserver provider=kubernetes Annotations: Selector: Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.1 IPs: 10.0.0.1 Port: https 443/TCP TargetPort: 6443/TCP Endpoints: 10.33.29.5:6443 Session Affinity: None Events: Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.139 IPs: 10.0.0.139 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None Events: (BSuccessful describe Name: kubernetes Namespace: default Labels: component=apiserver provider=kubernetes Annotations: Selector: Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.1 IPs: 10.0.0.1 Port: https 
443/TCP TargetPort: 6443/TCP Endpoints: 10.33.29.5:6443 Session Affinity: None Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.139 IPs: 10.0.0.139 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None (BSuccessful describe Name: kubernetes Namespace: default Labels: component=apiserver provider=kubernetes Annotations: Selector: Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.1 IPs: 10.0.0.1 Port: https 443/TCP TargetPort: 6443/TCP Endpoints: 10.33.29.5:6443 Session Affinity: None Events: Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.139 IPs: 10.0.0.139 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None Events: (Bquery for services had limit param query for events had limit param query for services had user-specified limit param Successful describe services verbose logs: I0318 12:54:56.668287 44900 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:54:56.674441 44900 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds I0318 12:54:56.679726 44900 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services?limit=500 200 OK in 1 milliseconds I0318 12:54:56.682566 44900 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services/kubernetes 200 OK in 1 milliseconds I0318 12:54:56.684243 44900 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/endpoints/kubernetes 200 OK in 1 milliseconds I0318 12:54:56.685820 44900 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/events?fieldSelector=involvedObject.name%3Dkubernetes%2CinvolvedObject.namespace%3Ddefault%2CinvolvedObject.kind%3DService%2CinvolvedObject.uid%3Db129db99-8611-4dbe-af79-f98ce1739835&limit=500 200 OK in 1 milliseconds I0318 12:54:56.688626 44900 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services/redis-master 200 OK in 1 milliseconds I0318 12:54:56.689986 44900 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/endpoints/redis-master 200 OK in 1 milliseconds I0318 12:54:56.691363 44900 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/events?fieldSelector=involvedObject.name%3Dredis-master%2CinvolvedObject.namespace%3Ddefault%2CinvolvedObject.kind%3DService%2CinvolvedObject.uid%3D81f34d28-05f9-4db1-ab37-7956ac1aeb16&limit=500 200 OK in 1 milliseconds (Bcore.sh:1015: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend: (BapiVersion: v1 kind: Service metadata: creationTimestamp: null labels: app: redis role: master tier: backend name: redis-master spec: ports: - port: 6379 targetPort: 6379 selector: role: padawan status: loadBalancer: {} apiVersion: v1 kind: Service metadata: creationTimestamp: "2023-03-18T12:54:55Z" labels: app: redis role: master tier: backend name: redis-master namespace: default resourceVersion: "2127" uid: 81f34d28-05f9-4db1-ab37-7956ac1aeb16 spec: clusterIP: 10.0.0.139 clusterIPs: - 10.0.0.139 internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - port: 6379 protocol: TCP targetPort: 6379 selector: 
role: padawan sessionAffinity: None type: ClusterIP status: loadBalancer: {} service/redis-master selector updated core.sh:1023: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: padawan: (Bservice/redis-master selector updated core.sh:1027: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend: (BapiVersion: v1 kind: Service metadata: creationTimestamp: "2023-03-18T12:54:55Z" labels: app: redis role: master tier: backend name: redis-master namespace: default resourceVersion: "2131" uid: 81f34d28-05f9-4db1-ab37-7956ac1aeb16 spec: clusterIP: 10.0.0.139 clusterIPs: - 10.0.0.139 internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - port: 6379 protocol: TCP targetPort: 6379 selector: role: padawan sessionAffinity: None type: ClusterIP status: loadBalancer: {} apiVersion: v1 kind: Service metadata: creationTimestamp: "2023-03-18T12:54:55Z" labels: app: redis role: master tier: backend name: redis-master namespace: default resourceVersion: "2131" uid: 81f34d28-05f9-4db1-ab37-7956ac1aeb16 spec: clusterIP: 10.0.0.139 clusterIPs: - 10.0.0.139 internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - port: 6379 protocol: TCP targetPort: 6379 selector: role: padawan sessionAffinity: None type: ClusterIP status: loadBalancer: {} Successful (Bmessage:kubectl-create kubectl-set has:kubectl-set error: you must specify resources by --filename when --local is set. Example resource specifications include: '-f rsrc.yaml' '--filename=rsrc.json' core.sh:1034: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend: (Bservice/redis-master selector updated Successful (Bmessage:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again has:Conflict core.sh:1047: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master: (Bservice "redis-master" deleted core.sh:1054: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (Bcore.sh:1058: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (BI0318 12:54:58.352486 19996 alloc.go:330] "allocated clusterIPs" service="default/redis-master" clusterIPs=map[IPv4:10.0.0.139] service/redis-master created core.sh:1062: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master: (Bcore.sh:1066: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master: (BI0318 12:54:58.496167 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="test-jobs" I0318 12:54:58.723816 19996 alloc.go:330] "allocated clusterIPs" service="default/service-v1-test" clusterIPs=map[IPv4:10.0.0.223] service/service-v1-test created core.sh:1087: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test: (Bservice/service-v1-test replaced core.sh:1094: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test: (Bservice "redis-master" deleted service "service-v1-test" deleted core.sh:1102: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (Bcore.sh:1106: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (BI0318 12:54:59.624926 19996 alloc.go:330] "allocated clusterIPs" service="default/redis-master" 
clusterIPs=map[IPv4:10.0.0.217]
service/redis-master created
I0318 12:54:59.885581 19996 alloc.go:330] "allocated clusterIPs" service="default/redis-slave" clusterIPs=map[IPv4:10.0.0.172]
service/redis-slave created
core.sh:1111: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
Successful
message:NAME           RSRC
kubernetes     191
redis-master   2153
redis-slave    2157
has:redis-master
core.sh:1121: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
service "redis-master" deleted
service "redis-slave" deleted
core.sh:1128: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:1132: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
service/beep-boop created (dry run)
service/beep-boop created (server dry run)
core.sh:1136: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
service/beep-boop created
core.sh:1140: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
core.sh:1144: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
service "beep-boop" deleted
core.sh:1151: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:1155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
core.sh:1157: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
service/testmetadata created (dry run)
pod/testmetadata created (dry run)
service/testmetadata created (server dry run)
pod/testmetadata created (server dry run)
core.sh:1162: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0318 12:55:01.239162 19996 alloc.go:330] "allocated clusterIPs" service="default/testmetadata" clusterIPs=map[IPv4:10.0.0.151]
service/testmetadata created
pod/testmetadata created
core.sh:1166: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
core.sh:1167: Successful get service testmetadata {{(index .spec.ports 0).port}}: 80
Successful
message:kubectl-run
has:kubectl-run
I0318 12:55:01.538729 19996 alloc.go:330] "allocated clusterIPs" service="default/exposemetadata" clusterIPs=map[IPv4:10.0.0.246]
service/exposemetadata exposed
core.sh:1176: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work]
Successful
message:kubectl-expose
has:kubectl-expose
service "exposemetadata" deleted
service "testmetadata" deleted
pod "testmetadata" deleted
+++ exit code: 0
Recording: run_daemonset_tests
Running command: run_daemonset_tests
+++ Running case: test-cmd.run_daemonset_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_daemonset_tests
+++ [0318 12:55:01] Creating namespace namespace-1679144101-12024
namespace/namespace-1679144101-12024 created
Context "test" modified.
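The service cases above lean on client- and server-side dry runs and on field-manager tracking (the kubectl-run and kubectl-expose managers asserted in the messages). A minimal sketch of equivalent invocations (the test drives these through core.sh fixtures, so the exact flags and the nginx image here are illustrative):

# Client- and server-side dry-run creation, as in the beep-boop checks;
# neither command persists anything to etcd.
kubectl create service clusterip beep-boop --tcp=8080 --dry-run=client
kubectl create service clusterip beep-boop --tcp=8080 --dry-run=server

# Pod plus service in one shot, as in the testmetadata case; the resulting
# service carries the kubectl-run field manager checked above.
kubectl run testmetadata --image=nginx --port=80 --expose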
+++ [0318 12:55:02] Testing kubectl(v1:daemonsets)
apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}:
I0318 12:55:02.343151 19996 controller.go:624] quota admission added evaluator for: daemonsets.apps
daemonset.apps/bind created
I0318 12:55:02.364074 19996 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
daemonset.apps/bind configured
apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
daemonset.apps/bind image updated
apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
daemonset.apps/bind env updated
apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
daemonset.apps/bind resource requirements updated
apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4
Successful
message:kubectl-client-side-apply kube-controller-manager kubectl-set
has:kubectl-set
query for daemonsets had limit param
query for pods had limit param
query for events had limit param
query for daemonsets had user-specified limit param
Successful describe daemonsets verbose logs:
I0318 12:55:03.244490 45974 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:55:03.249052 45974 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0318 12:55:03.254843 45974 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1679144101-12024/daemonsets?limit=500 200 OK in 1 milliseconds
I0318 12:55:03.257256 45974 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1679144101-12024/daemonsets/bind 200 OK in 1 milliseconds
I0318 12:55:03.261677 45974 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144101-12024/pods?labelSelector=service%3Dbind&limit=500 200 OK in 1 milliseconds
I0318 12:55:03.263344 45974 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144101-12024/events?fieldSelector=involvedObject.name%3Dbind%2CinvolvedObject.namespace%3Dnamespace-1679144101-12024%2CinvolvedObject.kind%3DDaemonSet%2CinvolvedObject.uid%3Df9945a78-654e-4e7c-bbdb-9e7080b2b1c0&limit=500 200 OK in 1 milliseconds
daemonset.apps/bind restarted
apps.sh:53: Successful get daemonsets bind {{.metadata.generation}}: 5
daemonset.apps "bind" deleted
+++ exit code: 0
Recording: run_daemonset_history_tests
Running command: run_daemonset_history_tests
+++ Running case: test-cmd.run_daemonset_history_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_daemonset_history_tests
+++ [0318 12:55:03] Creating namespace namespace-1679144103-10756
namespace/namespace-1679144103-10756 created
Context "test" modified.
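Each of the daemonset updates above bumps .metadata.generation, which the apps.sh assertions read back with a go-template. A minimal sketch of the sequence (fixture path as in the repo; the image, env var, and resource values are placeholders, not the test's exact arguments):

kubectl apply -f hack/testdata/rollingupdate-daemonset.yaml           # generation 1
kubectl set image daemonset/bind '*=registry.k8s.io/pause:latest'     # -> 2
kubectl set env daemonset/bind EXAMPLE_VAR=example                    # -> 3 (var name hypothetical)
kubectl set resources daemonset/bind --limits=cpu=200m,memory=512Mi   # -> 4 (values hypothetical)
kubectl rollout restart daemonset/bind                                # -> 5
kubectl get daemonsets bind -o go-template='{{.metadata.generation}}'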
+++ [0318 12:55:03] Testing kubectl(v1:daemonsets, v1:controllerrevisions) apps.sh:71: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: (BFlag --record has been deprecated, --record will be removed in the future daemonset.apps/bind created apps.sh:75: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1679144103-10756"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"registry.k8s.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}} kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true]: (Bdaemonset.apps/bind skipped rollback (current template already matches revision 1) apps.sh:78: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:2.0: (Bapps.sh:79: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1 (BFlag --record has been deprecated, --record will be removed in the future daemonset.apps/bind configured apps.sh:82: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:latest: (Bapps.sh:83: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (Bapps.sh:84: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2 (Bapps.sh:85: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1679144103-10756"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"registry.k8s.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}} kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true 
--server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true]:map[deprecated.daemonset.template.generation:2 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1679144103-10756"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"registry.k8s.io/pause:latest","name":"kubernetes-pause"},{"image":"registry.k8s.io/nginx:test-cmd","name":"app"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}} kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true]: (BSuccessful (Bmessage:daemonset.apps/bind REVISION CHANGE-CAUSE 1 kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 2 kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:daemonset.apps/bind Successful (Bmessage:daemonset.apps/bind REVISION CHANGE-CAUSE 1 kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 2 kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:REVISION CHANGE-CAUSE Successful (Bmessage:daemonset.apps/bind REVISION CHANGE-CAUSE 1 kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 2 kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:1 kubectl apply Successful (Bmessage:daemonset.apps/bind REVISION CHANGE-CAUSE 1 kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 2 kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:2 kubectl apply Successful (Bmessage:daemonset.apps/bind with revision #1 Pod Template: Labels: service=bind Containers: kubernetes-pause: Image: registry.k8s.io/pause:2.0 Port: Host Port: Environment: Mounts: Volumes: has:daemonset.apps/bind with revision #1 Successful (Bmessage:daemonset.apps/bind with revision #1 Pod Template: Labels: service=bind Containers: kubernetes-pause: Image: registry.k8s.io/pause:2.0 Port: Host Port: Environment: Mounts: Volumes: has:Pod 
Template: Successful (Bmessage:daemonset.apps/bind with revision #1 Pod Template: Labels: service=bind Containers: kubernetes-pause: Image: registry.k8s.io/pause:2.0 Port: Host Port: Environment: Mounts: Volumes: has:registry.k8s.io/pause:2.0 Successful (Bmessage:daemonset.apps/bind with revision #2 Pod Template: Labels: service=bind Containers: kubernetes-pause: Image: registry.k8s.io/pause:latest Port: Host Port: Environment: Mounts: app: Image: registry.k8s.io/nginx:test-cmd Port: Host Port: Environment: Mounts: Volumes: has:daemonset.apps/bind with revision #2 Successful (Bmessage:daemonset.apps/bind with revision #2 Pod Template: Labels: service=bind Containers: kubernetes-pause: Image: registry.k8s.io/pause:latest Port: Host Port: Environment: Mounts: app: Image: registry.k8s.io/nginx:test-cmd Port: Host Port: Environment: Mounts: Volumes: has:Pod Template: Successful (Bmessage:daemonset.apps/bind with revision #2 Pod Template: Labels: service=bind Containers: kubernetes-pause: Image: registry.k8s.io/pause:latest Port: Host Port: Environment: Mounts: app: Image: registry.k8s.io/nginx:test-cmd Port: Host Port: Environment: Mounts: Volumes: has:registry.k8s.io/pause:latest Successful (Bmessage:daemonset.apps/bind with revision #2 Pod Template: Labels: service=bind Containers: kubernetes-pause: Image: registry.k8s.io/pause:latest Port: Host Port: Environment: Mounts: app: Image: registry.k8s.io/nginx:test-cmd Port: Host Port: Environment: Mounts: Volumes: has:registry.k8s.io/nginx:test-cmd daemonset.apps/bind will roll back to Pod Template: Labels: service=bind Containers: kubernetes-pause: Image: registry.k8s.io/pause:2.0 Port: Host Port: Environment: Mounts: Volumes: (dry run) daemonset.apps/bind rolled back (server dry run) apps.sh:106: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:latest: (Bapps.sh:107: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (Bapps.sh:108: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2 (Bdaemonset.apps/bind rolled back apps.sh:111: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:2.0: (Bapps.sh:112: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1 (BSuccessful (Bmessage:daemonset.apps/bind REVISION CHANGE-CAUSE 2 kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 3 kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:daemonset.apps/bind Successful (Bmessage:daemonset.apps/bind REVISION CHANGE-CAUSE 2 kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 3 kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:REVISION CHANGE-CAUSE Successful (Bmessage:daemonset.apps/bind REVISION CHANGE-CAUSE 2 kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true 
--match-server-version=true 3 kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:2 kubectl apply Successful (Bmessage:daemonset.apps/bind REVISION CHANGE-CAUSE 2 kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 3 kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:3 kubectl apply Successful (Bmessage:error: unable to find specified revision 1000000 in history has:unable to find specified revision apps.sh:122: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:2.0: (Bapps.sh:123: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1 (Bdaemonset.apps/bind rolled back apps.sh:126: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:latest: (Bapps.sh:127: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (Bapps.sh:128: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2 (BSuccessful (Bmessage:daemonset.apps/bind REVISION CHANGE-CAUSE 3 kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 4 kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:daemonset.apps/bind Successful (Bmessage:daemonset.apps/bind REVISION CHANGE-CAUSE 3 kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 4 kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:REVISION CHANGE-CAUSE Successful (Bmessage:daemonset.apps/bind REVISION CHANGE-CAUSE 3 kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 4 kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:3 kubectl apply Successful (Bmessage:daemonset.apps/bind REVISION CHANGE-CAUSE 3 kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 4 kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:4 kubectl apply daemonset.apps "bind" deleted +++ exit code: 0 Recording: run_rc_tests Running command: run_rc_tests +++ Running case: test-cmd.run_rc_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_rc_tests +++ [0318 12:55:06] Creating namespace namespace-1679144106-6445 namespace/namespace-1679144106-6445 created 
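The controllerrevision cases above come down to kubectl rollout against the two recorded fixture revisions. A minimal sketch (fixture paths as printed in the change-cause annotations above; --record is deprecated, as the warnings note, but it is what the test exercises):

kubectl apply -f hack/testdata/rollingupdate-daemonset.yaml --record       # revision 1
kubectl apply -f hack/testdata/rollingupdate-daemonset-rv2.yaml --record   # revision 2
kubectl rollout history daemonset/bind                # lists both revisions with change-cause
kubectl rollout history daemonset/bind --revision=1   # prints the pause:2.0 pod template
kubectl rollout undo daemonset/bind --to-revision=1 --dry-run=server
kubectl rollout undo daemonset/bind --to-revision=1
kubectl rollout undo daemonset/bind --to-revision=1000000   # fails: revision not found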
Context "test" modified. +++ [0318 12:55:06] Testing kubectl(v1:replicationcontrollers) core.sh:1205: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (Breplicationcontroller/frontend created I0318 12:55:06.762629 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-g7jph" I0318 12:55:06.780104 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-7gnsf" I0318 12:55:06.780189 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-ncdzk" replicationcontroller "frontend" deleted E0318 12:55:06.830910 23056 replica_set.go:544] sync "namespace-1679144106-6445/frontend" failed with replicationcontrollers "frontend" not found core.sh:1210: Successful get pods -l name=frontend {{range.items}}{{.metadata.name}}:{{end}}: (Bcore.sh:1214: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (Breplicationcontroller/frontend created I0318 12:55:07.193608 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-7wlnr" I0318 12:55:07.212027 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-j29fz" I0318 12:55:07.212060 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-r9dcb" core.sh:1218: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend: (Bmatched Name: matched Pod Template: matched Labels: matched Selector: matched Replicas: matched Pods Status: matched Volumes: matched GET_HOSTS_FROM: core.sh:1220: Successful describe rc frontend: Name: frontend Namespace: namespace-1679144106-6445 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s replication-controller Created pod: frontend-7wlnr Normal SuccessfulCreate 0s replication-controller Created pod: frontend-j29fz Normal SuccessfulCreate 0s replication-controller Created pod: frontend-r9dcb (Bcore.sh:1222: Successful describe Name: frontend Namespace: namespace-1679144106-6445 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: 
GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s replication-controller Created pod: frontend-7wlnr Normal SuccessfulCreate 0s replication-controller Created pod: frontend-j29fz Normal SuccessfulCreate 0s replication-controller Created pod: frontend-r9dcb (B core.sh:1224: Successful describe Name: frontend Namespace: namespace-1679144106-6445 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: (B core.sh:1226: Successful describe Name: frontend Namespace: namespace-1679144106-6445 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s replication-controller Created pod: frontend-7wlnr Normal SuccessfulCreate 0s replication-controller Created pod: frontend-j29fz Normal SuccessfulCreate 0s replication-controller Created pod: frontend-r9dcb (B matched Name: matched Name: matched Pod Template: matched Labels: matched Selector: matched Replicas: matched Pods Status: matched Volumes: matched GET_HOSTS_FROM: Successful describe rc: Name: frontend Namespace: namespace-1679144106-6445 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s replication-controller Created pod: frontend-7wlnr Normal SuccessfulCreate 0s replication-controller Created pod: frontend-j29fz Normal SuccessfulCreate 0s replication-controller Created pod: frontend-r9dcb (BSuccessful describe Name: frontend Namespace: namespace-1679144106-6445 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s replication-controller Created pod: frontend-7wlnr Normal SuccessfulCreate 0s replication-controller Created pod: frontend-j29fz Normal SuccessfulCreate 0s replication-controller Created pod: frontend-r9dcb (BW0318 12:55:07.776753 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not 
find the requested resource E0318 12:55:07.776794 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource Successful describe Name: frontend Namespace: namespace-1679144106-6445 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: (BSuccessful describe Name: frontend Namespace: namespace-1679144106-6445 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s replication-controller Created pod: frontend-7wlnr Normal SuccessfulCreate 0s replication-controller Created pod: frontend-j29fz Normal SuccessfulCreate 0s replication-controller Created pod: frontend-r9dcb (Bquery for replicationcontrollers had limit param query for events had limit param query for replicationcontrollers had user-specified limit param Successful describe replicationcontrollers verbose logs: I0318 12:55:07.913062 46887 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:55:07.919341 46887 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds I0318 12:55:07.925550 46887 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144106-6445/replicationcontrollers?limit=500 200 OK in 1 milliseconds I0318 12:55:07.927721 46887 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144106-6445/replicationcontrollers/frontend 200 OK in 1 milliseconds I0318 12:55:07.930847 46887 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144106-6445/pods?labelSelector=app%3Dguestbook%2Ctier%3Dfrontend&limit=500 200 OK in 1 milliseconds I0318 12:55:07.933049 46887 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144106-6445/events?fieldSelector=involvedObject.kind%3DReplicationController%2CinvolvedObject.uid%3Db2e20874-029f-4d51-b0b8-9a9cb9419a61%2CinvolvedObject.name%3Dfrontend%2CinvolvedObject.namespace%3Dnamespace-1679144106-6445&limit=500 200 OK in 1 milliseconds (Bcore.sh:1240: Successful get rc frontend {{.spec.replicas}}: 3 (Breplicationcontroller/frontend scaled E0318 12:55:08.147822 23056 replica_set.go:220] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend namespace-1679144106-6445 b2e20874-029f-4d51-b0b8-9a9cb9419a61 2265 2 2023-03-18 12:55:07 +0000 UTC map[app:guestbook tier:frontend] map[] [] [] [{kubectl Update v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {kube-controller-manager Update v1 2023-03-18 12:55:07 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kubectl-create Update v1 2023-03-18 12:55:07 +0000 
UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:selector":{},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[app:guestbook tier:frontend] map[] [] [] []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] [] [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{104857600 0} {} 100Mi BinarySI}] []} [] [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003300a48 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} I0318 12:55:08.175471 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: frontend-r9dcb" core.sh:1244: Successful get rc frontend {{.spec.replicas}}: 2 (Bcore.sh:1248: Successful get rc frontend {{.spec.replicas}}: 2 (Berror: Expected replicas to be 3, was 2 core.sh:1252: Successful get rc frontend {{.spec.replicas}}: 2 (Bcore.sh:1256: Successful get rc frontend {{.spec.replicas}}: 2 (Breplicationcontroller/frontend scaled I0318 12:55:08.573438 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-q2f9b" core.sh:1260: Successful get rc frontend {{.spec.replicas}}: 3 (Bcore.sh:1264: Successful get rc frontend {{.spec.replicas}}: 3 (Breplicationcontroller/frontend scaled E0318 12:55:08.765159 23056 replica_set.go:220] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend namespace-1679144106-6445 b2e20874-029f-4d51-b0b8-9a9cb9419a61 2276 4 2023-03-18 12:55:07 +0000 UTC map[app:guestbook tier:frontend] map[] [] [] [{kubectl Update v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {kubectl-create Update v1 2023-03-18 12:55:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:selector":{},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update v1 2023-03-18 12:55:08 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[app:guestbook tier:frontend] map[] [] [] []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] [] [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{104857600 0} {} 100Mi BinarySI}] []} [] [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002c20038 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} I0318 12:55:08.793929 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: frontend-q2f9b" core.sh:1268: Successful get rc frontend {{.spec.replicas}}: 2 (Breplicationcontroller "frontend" deleted replicationcontroller/redis-master created I0318 12:55:09.156073 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/redis-master" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-master-s8zck" replicationcontroller/redis-slave created I0318 12:55:09.373072 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/redis-slave" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-l8drv" I0318 12:55:09.390242 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/redis-slave" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-nn5rm" Successful (Bmessage:replicationcontroller/redis-master scaled (dry run) replicationcontroller/redis-slave scaled (dry run) has:replicationcontroller/redis-master scaled (dry run) Successful (Bmessage:replicationcontroller/redis-master scaled (dry run) replicationcontroller/redis-slave scaled (dry run) has:replicationcontroller/redis-slave scaled (dry run) core.sh:1280: Successful get rc 
redis-master {{.spec.replicas}}: 1
core.sh:1281: Successful get rc redis-slave {{.spec.replicas}}: 2
Successful
message:replicationcontroller/redis-master scaled (server dry run)
replicationcontroller/redis-slave scaled (server dry run)
has:replicationcontroller/redis-master scaled (server dry run)
Successful
message:replicationcontroller/redis-master scaled (server dry run)
replicationcontroller/redis-slave scaled (server dry run)
has:replicationcontroller/redis-slave scaled (server dry run)
core.sh:1287: Successful get rc redis-master {{.spec.replicas}}: 1
core.sh:1288: Successful get rc redis-slave {{.spec.replicas}}: 2
replicationcontroller/redis-master scaled
replicationcontroller/redis-slave scaled
I0318 12:55:09.940675 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/redis-slave" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-xlljb"
I0318 12:55:09.940710 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/redis-master" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-master-w724t"
core.sh:1292: Successful get rc redis-master {{.spec.replicas}}: 4
I0318 12:55:09.958108 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/redis-master" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-master-qp9nd"
I0318 12:55:09.958411 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/redis-slave" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-nmvtm"
I0318 12:55:09.958439 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/redis-master" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-master-snj5z"
core.sh:1293: Successful get rc redis-slave {{.spec.replicas}}: 4
replicationcontroller "redis-master" deleted
replicationcontroller "redis-slave" deleted
W0318 12:55:10.124366 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0318 12:55:10.124404 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment created
I0318 12:55:10.330216 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-7df65dc9f4 to 3"
I0318 12:55:10.341136 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-bd2x5"
I0318 12:55:10.355594 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-9ccxd"
I0318 12:55:10.355958 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-5s5jk"
Successful
message:deployment.apps/nginx-deployment scaled (dry run)
has:nginx-deployment scaled (dry run)
core.sh:1303: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
Successful
message:deployment.apps/nginx-deployment scaled (server dry run)
has:nginx-deployment scaled (server dry run)
core.sh:1308: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
deployment.apps/nginx-deployment scaled
I0318 12:55:10.681176 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-7df65dc9f4 to 1 from 3"
I0318 12:55:10.712302 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-7df65dc9f4-bd2x5"
I0318 12:55:10.749934 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-7df65dc9f4-9ccxd"
core.sh:1312: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
deployment.apps "nginx-deployment" deleted
deployment.apps/nginx-deployment created
I0318 12:55:11.041826 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-7df65dc9f4 to 3"
I0318 12:55:11.062467 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-x86fs"
I0318 12:55:11.071688 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-bm8td"
I0318 12:55:11.078258 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-2ctjs"
deployment.apps/nginx-deployment scaled
I0318 12:55:11.147896 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-7df65dc9f4 to 2 from 3"
I0318 12:55:11.170316 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-7df65dc9f4-2ctjs"
core.sh:1321: Successful get deployment nginx-deployment {{.spec.replicas}}: 2
deployment.apps "nginx-deployment" deleted
I0318 12:55:11.366646 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144106-6445/expose-test-deployment" clusterIPs=map[IPv4:10.0.0.52]
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
has:invalid deployment: no selectors
deployment.apps/nginx-deployment created
I0318 12:55:11.752180 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-7df65dc9f4 to 3"
I0318 12:55:11.770965 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-lz782"
I0318 12:55:11.782957 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-lzbzr"
I0318 12:55:11.813611 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-hqqrn"
core.sh:1340: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0318 12:55:11.900494 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144106-6445/nginx-deployment" clusterIPs=map[IPv4:10.0.0.240]
service/nginx-deployment exposed
core.sh:1344: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
deployment.apps "nginx-deployment" deleted
service "nginx-deployment" deleted
replicationcontroller/frontend created
I0318 12:55:12.349134 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-cn7xf"
I0318 12:55:12.366455 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-z9zkg"
I0318 12:55:12.366489 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-zrzpx"
core.sh:1351: Successful get rc frontend {{.spec.replicas}}: 3
I0318 12:55:12.520921 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144106-6445/frontend" clusterIPs=map[IPv4:10.0.0.44]
service/frontend exposed
core.sh:1355: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: 80
I0318 12:55:12.670002 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144106-6445/frontend-2" clusterIPs=map[IPv4:10.0.0.37]
service/frontend-2 exposed
core.sh:1359: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: 443
pod/valid-pod created
I0318 12:55:13.051545 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144106-6445/frontend-3" clusterIPs=map[IPv4:10.0.0.34]
service/frontend-3 exposed
core.sh:1364: Successful get service frontend-3 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: 444
I0318 12:55:13.215305 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144106-6445/frontend-4" clusterIPs=map[IPv4:10.0.0.38]
service/frontend-4 exposed
core.sh:1368: Successful get service frontend-4 {{(index .spec.ports 0).port}}: 80
pod "valid-pod" deleted
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
I0318 12:55:13.809638 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144106-6445/kubernetes-serve-hostname-testing-sixty-three-characters-in-len" clusterIPs=map[IPv4:10.0.0.221]
Successful
message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
has:kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
service "kubernetes-serve-hostname-testing-sixty-three-characters-in-len" deleted
I0318 12:55:14.015114 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144106-6445/etcd-server" clusterIPs=map[IPv4:10.0.0.124]
Successful
message:service/etcd-server exposed
has:etcd-server exposed
core.sh:1398: Successful get service etcd-server {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: port-1 2380
core.sh:1399: Successful get service etcd-server {{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}: port-2 2379
service "etcd-server" deleted
core.sh:1405: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
replicationcontroller "frontend" deleted
core.sh:1409: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
core.sh:1413: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
replicationcontroller/frontend created
I0318 12:55:14.783690 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-b2hfp"
I0318 12:55:14.800449 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-blw78"
I0318 12:55:14.800484 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-sg92s"
replicationcontroller/redis-slave created
I0318 12:55:15.014042 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/redis-slave" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-6pmjn"
I0318 12:55:15.035146 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/redis-slave" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-6nk2w"
core.sh:1418: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
core.sh:1422: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
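The exposures above map onto kubectl expose; a sketch with the names and ports taken from the log (the precise flags used by core.sh are an assumption). The failures in the same stretch come from the same subcommand: a deployment with no selector, a Node, and a name longer than 63 characters are all rejected.

  kubectl expose deployment nginx-deployment --port=80
  kubectl expose rc frontend --port=80
  kubectl expose service frontend --port=443 --name=frontend-2
  kubectl expose pod valid-pod --port=444 --name=frontend-3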
(Breplicationcontroller "frontend" deleted replicationcontroller "redis-slave" deleted core.sh:1426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (Bcore.sh:1430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (Breplicationcontroller/frontend created I0318 12:55:15.635889 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-77frd" I0318 12:55:15.652617 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-mnxgg" I0318 12:55:15.652645 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-x4gt2" core.sh:1433: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend: (Bhorizontalpodautoscaler.autoscaling/frontend autoscaled core.sh:1436: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 70 (Bhorizontalpodautoscaler.autoscaling "frontend" deleted horizontalpodautoscaler.autoscaling/frontend autoscaled core.sh:1440: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80 (Bhorizontalpodautoscaler.autoscaling "frontend" deleted error: required flag(s) "max" not set replicationcontroller "frontend" deleted core.sh:1449: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (BapiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: name: nginx-deployment-resources name: nginx-deployment-resources spec: replicas: 3 selector: matchLabels: name: nginx strategy: {} template: metadata: creationTimestamp: null labels: name: nginx spec: containers: - image: registry.k8s.io/nginx:test-cmd name: nginx ports: - containerPort: 80 resources: {} - image: registry.k8s.io/perl name: perl resources: limits: cpu: 300m requests: cpu: 300m terminationGracePeriodSeconds: 0 status: {} Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found deployment.apps/nginx-deployment-resources created I0318 12:55:16.630085 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-5f79767bf9 to 3" I0318 12:55:16.663659 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-resources-5f79767bf9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-5f79767bf9-wfdjt" I0318 12:55:16.680106 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-resources-5f79767bf9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-5f79767bf9-hzlgx" I0318 12:55:16.680265 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-resources-5f79767bf9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" 
message="Created pod: nginx-deployment-resources-5f79767bf9-86dwd" core.sh:1455: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources: (Bcore.sh:1456: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (Bcore.sh:1457: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl: (Bdeployment.apps/nginx-deployment-resources resource requirements updated I0318 12:55:16.948068 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-77d775b4f9 to 1" I0318 12:55:16.967111 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-resources-77d775b4f9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-77d775b4f9-dd8p5" core.sh:1460: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m: (Bcore.sh:1461: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m: (Berror: unable to find container named redis deployment.apps/nginx-deployment-resources resource requirements updated I0318 12:55:17.281464 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-resources-5f79767bf9 to 2 from 3" I0318 12:55:17.303255 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-688f8b78b5 to 1 from 0" core.sh:1466: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m: (BI0318 12:55:17.310446 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-resources-5f79767bf9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-resources-5f79767bf9-hzlgx" I0318 12:55:17.317195 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-resources-688f8b78b5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-688f8b78b5-gtq6f" core.sh:1467: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m: (Bdeployment.apps/nginx-deployment-resources resource requirements updated I0318 12:55:17.512930 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-resources-5f79767bf9 to 1 from 2" I0318 12:55:17.539402 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-resources-5f79767bf9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" 
reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-resources-5f79767bf9-wfdjt" core.sh:1470: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m: (BI0318 12:55:17.546171 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-76bd9bdfc8 to 1 from 0" I0318 12:55:17.561356 23056 event.go:307] "Event occurred" object="namespace-1679144106-6445/nginx-deployment-resources-76bd9bdfc8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-76bd9bdfc8-hj9xw" core.sh:1471: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m: (Bcore.sh:1472: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m: (BapiVersion: apps/v1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "4" creationTimestamp: "2023-03-18T12:55:16Z" generation: 4 labels: name: nginx-deployment-resources name: nginx-deployment-resources namespace: namespace-1679144106-6445 resourceVersion: "2622" uid: 4518f612-7d99-4d46-ac47-fcc58ac3504f spec: progressDeadlineSeconds: 600 replicas: 3 revisionHistoryLimit: 10 selector: matchLabels: name: nginx strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: creationTimestamp: null labels: name: nginx spec: containers: - image: registry.k8s.io/nginx:test-cmd imagePullPolicy: IfNotPresent name: nginx ports: - containerPort: 80 protocol: TCP resources: limits: cpu: 200m terminationMessagePath: /dev/termination-log terminationMessagePolicy: File - image: registry.k8s.io/perl imagePullPolicy: Always name: perl resources: limits: cpu: 400m requests: cpu: 400m terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 0 status: conditions: - lastTransitionTime: "2023-03-18T12:55:16Z" lastUpdateTime: "2023-03-18T12:55:16Z" message: Deployment does not have minimum availability. reason: MinimumReplicasUnavailable status: "False" type: Available - lastTransitionTime: "2023-03-18T12:55:16Z" lastUpdateTime: "2023-03-18T12:55:17Z" message: ReplicaSet "nginx-deployment-resources-76bd9bdfc8" is progressing. 
reason: ReplicaSetUpdated status: "True" type: Progressing observedGeneration: 4 replicas: 4 unavailableReplicas: 4 updatedReplicas: 1 apiVersion: apps/v1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "4" creationTimestamp: "2023-03-18T12:55:16Z" generation: 5 labels: name: nginx-deployment-resources name: nginx-deployment-resources namespace: namespace-1679144106-6445 resourceVersion: "2622" uid: 4518f612-7d99-4d46-ac47-fcc58ac3504f spec: progressDeadlineSeconds: 600 replicas: 3 revisionHistoryLimit: 10 selector: matchLabels: name: nginx strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: creationTimestamp: null labels: name: nginx spec: containers: - image: registry.k8s.io/nginx:test-cmd imagePullPolicy: IfNotPresent name: nginx ports: - containerPort: 80 protocol: TCP resources: limits: cpu: 200m terminationMessagePath: /dev/termination-log terminationMessagePolicy: File - image: registry.k8s.io/perl imagePullPolicy: Always name: perl resources: limits: cpu: 400m requests: cpu: 400m terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 0 status: conditions: - lastTransitionTime: "2023-03-18T12:55:16Z" lastUpdateTime: "2023-03-18T12:55:16Z" message: Deployment does not have minimum availability. reason: MinimumReplicasUnavailable status: "False" type: Available - lastTransitionTime: "2023-03-18T12:55:16Z" lastUpdateTime: "2023-03-18T12:55:17Z" message: ReplicaSet "nginx-deployment-resources-76bd9bdfc8" is progressing. reason: ReplicaSetUpdated status: "True" type: Progressing observedGeneration: 4 replicas: 4 unavailableReplicas: 4 updatedReplicas: 1 error: you must specify resources by --filename when --local is set. Example resource specifications include: '-f rsrc.yaml' '--filename=rsrc.json' core.sh:1477: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m: (Bcore.sh:1478: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m: (Bcore.sh:1479: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m: (Bdeployment.apps "nginx-deployment-resources" deleted +++ exit code: 0 Recording: run_deployment_tests Running command: run_deployment_tests +++ Running case: test-cmd.run_deployment_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_deployment_tests +++ [0318 12:55:18] Creating namespace namespace-1679144118-22967 namespace/namespace-1679144118-22967 created Context "test" modified. 
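The "resource requirements updated" lines and YAML dumps in the test case above come from kubectl set resources; a sketch under the same assumption (container names and CPU values are the ones asserted by core.sh:1460-1479; the manifest filename is hypothetical):

  # target one container by name; a name that does not exist fails with "unable to find container named redis"
  kubectl set resources deployment nginx-deployment-resources -c=nginx --limits=cpu=200m
  kubectl set resources deployment nginx-deployment-resources -c=perl --limits=cpu=300m --requests=cpu=300m
  # --local renders the change client-side, but requires -f/--filename input instead of a live object
  kubectl set resources -f deploy.yaml --local --limits=cpu=400m -o yaml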
+++ [0318 12:55:18] Testing deployments deployment.apps/test-nginx-extensions created I0318 12:55:18.419257 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/test-nginx-extensions" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-nginx-extensions-7c5769b76 to 1" I0318 12:55:18.443687 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/test-nginx-extensions-7c5769b76" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-nginx-extensions-7c5769b76-l9pxs" apps.sh:220: Successful get deploy test-nginx-extensions {{(index .spec.template.spec.containers 0).name}}: nginx (BSuccessful (Bmessage:10 has not:2 Successful (Bmessage:apps/v1 has:apps/v1 deployment.apps "test-nginx-extensions" deleted deployment.apps/test-nginx-apps created I0318 12:55:18.795054 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/test-nginx-apps" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-nginx-apps-859689d794 to 1" I0318 12:55:18.811669 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/test-nginx-apps-859689d794" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-nginx-apps-859689d794-jgd5p" apps.sh:233: Successful get deploy test-nginx-apps {{(index .spec.template.spec.containers 0).name}}: nginx (BSuccessful (Bmessage:10 has:10 Successful (Bmessage:apps/v1 has:apps/v1 matched Name: matched Pod Template: matched Labels: matched Selector: matched Controlled By matched Replicas: matched Pods Status: matched Volumes: Successful describe rs: Name: test-nginx-apps-859689d794 Namespace: namespace-1679144118-22967 Selector: app=test-nginx-apps,pod-template-hash=859689d794 Labels: app=test-nginx-apps pod-template-hash=859689d794 Annotations: deployment.kubernetes.io/desired-replicas: 1 deployment.kubernetes.io/max-replicas: 2 deployment.kubernetes.io/revision: 1 Controlled By: Deployment/test-nginx-apps Replicas: 1 current / 1 desired Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=test-nginx-apps pod-template-hash=859689d794 Containers: nginx: Image: registry.k8s.io/nginx:test-cmd Port: Host Port: Environment: Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replicaset-controller Created pod: test-nginx-apps-859689d794-jgd5p (Bmatched Name: matched Image: matched Node: matched Labels: matched Status: matched Controlled By Successful describe pods: Name: test-nginx-apps-859689d794-jgd5p Namespace: namespace-1679144118-22967 Priority: 0 Node: Labels: app=test-nginx-apps pod-template-hash=859689d794 Annotations: Status: Pending IP: IPs: Controlled By: ReplicaSet/test-nginx-apps-859689d794 Containers: nginx: Image: registry.k8s.io/nginx:test-cmd Port: Host Port: Environment: Mounts: Volumes: QoS Class: BestEffort Node-Selectors: Tolerations: Events: (Bquery for deployments had limit param query for replicasets had limit param query for events had limit param query for deployments had user-specified limit param Successful describe deployments verbose logs: I0318 12:55:19.214327 48611 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:55:19.220883 48611 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 
OK in 6 milliseconds I0318 12:55:19.226680 48611 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1679144118-22967/deployments?limit=500 200 OK in 2 milliseconds I0318 12:55:19.230648 48611 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1679144118-22967/deployments/test-nginx-apps 200 OK in 1 milliseconds I0318 12:55:19.234402 48611 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144118-22967/events?fieldSelector=involvedObject.uid%3Dbcb50a66-1977-469a-9ef1-d0cb575881c0%2CinvolvedObject.name%3Dtest-nginx-apps%2CinvolvedObject.namespace%3Dnamespace-1679144118-22967%2CinvolvedObject.kind%3DDeployment&limit=500 200 OK in 2 milliseconds I0318 12:55:19.236414 48611 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1679144118-22967/replicasets?labelSelector=app%3Dtest-nginx-apps&limit=500 200 OK in 1 milliseconds (Bdeployment.apps "test-nginx-apps" deleted apps.sh:251: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bdeployment.apps/nginx-with-command created (dry run) deployment.apps/nginx-with-command created (server dry run) apps.sh:255: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bdeployment.apps/nginx-with-command created I0318 12:55:19.746161 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-with-command" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-with-command-8b8d9b79b to 1" I0318 12:55:19.763160 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-with-command-8b8d9b79b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-with-command-8b8d9b79b-fjxpd" apps.sh:259: Successful get deploy nginx-with-command {{(index .spec.template.spec.containers 0).name}}: nginx (Bdeployment.apps "nginx-with-command" deleted apps.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bdeployment.apps/deployment-with-unixuserid created I0318 12:55:20.157772 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/deployment-with-unixuserid" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set deployment-with-unixuserid-5885495f7 to 1" I0318 12:55:20.175914 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/deployment-with-unixuserid-5885495f7" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deployment-with-unixuserid-5885495f7-nd6pr" apps.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: deployment-with-unixuserid: (Bdeployment.apps "deployment-with-unixuserid" deleted apps.sh:276: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bdeployment.apps/nginx-deployment created I0318 12:55:20.574944 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-7df65dc9f4 to 3" I0318 12:55:20.594542 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: 
nginx-deployment-7df65dc9f4-lpgt4" I0318 12:55:20.611612 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-h562f" I0318 12:55:20.611648 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-rgd45" apps.sh:280: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 3 (Bdeployment.apps "nginx-deployment" deleted apps.sh:284: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:288: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:289: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Bdeployment.apps/nginx-deployment created I0318 12:55:21.011205 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-55d75fc84b to 1" I0318 12:55:21.021635 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-55d75fc84b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-55d75fc84b-7jmr7" apps.sh:293: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1 (Bdeployment.apps "nginx-deployment" deleted apps.sh:298: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:299: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1 (Breplicaset.apps "nginx-deployment-55d75fc84b" deleted apps.sh:307: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:309: Successful get hpa {{range.items}}{{ if eq .metadata.name "nginx-deployment" }}found{{end}}{{end}}:: : (Bdeployment.apps/nginx-deployment created I0318 12:55:21.863212 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-7df65dc9f4 to 3" I0318 12:55:21.901387 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-w77tj" I0318 12:55:21.917171 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-wlc8c" I0318 12:55:21.927133 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-zwjj6" apps.sh:312: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment: (Bhorizontalpodautoscaler.autoscaling/nginx-deployment created (dry run) horizontalpodautoscaler.autoscaling/nginx-deployment autoscaled (server dry run) apps.sh:316: Successful get hpa {{range.items}}{{ if eq .metadata.name "nginx-deployment" }}found{{end}}{{end}}:: : 
(Bhorizontalpodautoscaler.autoscaling/nginx-deployment autoscaled apps.sh:319: Successful get hpa nginx-deployment {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80 (Bquery for horizontalpodautoscalers had limit param query for events had limit param query for horizontalpodautoscalers had user-specified limit param Successful describe horizontalpodautoscalers verbose logs: I0318 12:55:22.339020 49108 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:55:22.344344 49108 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:55:22.350706 49108 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/namespace-1679144118-22967/horizontalpodautoscalers?limit=500 200 OK in 1 milliseconds I0318 12:55:22.352892 49108 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/namespace-1679144118-22967/horizontalpodautoscalers/nginx-deployment 200 OK in 1 milliseconds I0318 12:55:22.356042 49108 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144118-22967/events?fieldSelector=involvedObject.name%3Dnginx-deployment%2CinvolvedObject.namespace%3Dnamespace-1679144118-22967%2CinvolvedObject.kind%3DHorizontalPodAutoscaler%2CinvolvedObject.uid%3D5eea6943-b884-46ed-8e60-9e4a95ff4d4d&limit=500 200 OK in 2 milliseconds (Bhorizontalpodautoscaler.autoscaling "nginx-deployment" deleted deployment.apps "nginx-deployment" deleted apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bdeployment.apps/nginx created I0318 12:55:22.859996 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-77566b75db to 3" I0318 12:55:22.894904 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-77566b75db" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-77566b75db-vkwrz" I0318 12:55:22.908437 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-77566b75db" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-77566b75db-5x4nt" I0318 12:55:22.915842 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-77566b75db" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-77566b75db-g5qpj" apps.sh:333: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx: (Bapps.sh:334: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (Bdeployment.apps/nginx skipped rollback (current template already matches revision 1) apps.sh:337: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (BW0318 12:55:23.323921 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:55:23.324301 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server 
could not find the requested resource Warning: resource deployments/nginx is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. deployment.apps/nginx configured I0318 12:55:23.363604 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-6b9cd9ccf6 to 1" I0318 12:55:23.381429 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-6b9cd9ccf6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6b9cd9ccf6-c8st8" apps.sh:340: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9: (B Image: registry.k8s.io/nginx:test-cmd deployment.apps/nginx rolled back (server dry run) apps.sh:344: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9: (Bdeployment.apps/nginx rolled back apps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (Berror: unable to find specified revision 1000000 in history apps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (Bdeployment.apps/nginx rolled back apps.sh:355: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9: (Bdeployment.apps/nginx paused error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume' and try again error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first) deployment.apps/nginx resumed deployment.apps/nginx rolled back deployment.kubernetes.io/revision-history: 1,3 error: desired revision (3) is different from the running revision (5) deployment.apps/nginx restarted I0318 12:55:26.858528 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-77566b75db to 2 from 3" I0318 12:55:26.882885 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-77566b75db" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-77566b75db-vkwrz" I0318 12:55:26.909859 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-68f555695f to 1 from 0" I0318 12:55:26.927687 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-68f555695f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-68f555695f-j8h62" Successful (Bmessage:apiVersion: apps/v1 kind: ReplicaSet metadata: annotations: deployment.kubernetes.io/desired-replicas: "3" deployment.kubernetes.io/max-replicas: "4" deployment.kubernetes.io/revision: "6" creationTimestamp: 
"2023-03-18T12:55:26Z" generation: 2 labels: name: nginx-undo pod-template-hash: 68f555695f name: nginx-68f555695f namespace: namespace-1679144118-22967 ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: Deployment name: nginx uid: 53af4103-00cb-40b2-9ebd-997b0e22ff58 resourceVersion: "2824" uid: 92ec5df1-f4e2-4f8c-9e33-704ac7fa75e6 spec: replicas: 1 selector: matchLabels: name: nginx-undo pod-template-hash: 68f555695f template: metadata: annotations: kubectl.kubernetes.io/restartedAt: "2023-03-18T12:55:26Z" creationTimestamp: null labels: name: nginx-undo pod-template-hash: 68f555695f spec: containers: - image: registry.k8s.io/nginx:test-cmd imagePullPolicy: IfNotPresent name: nginx ports: - containerPort: 80 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 status: fullyLabeledReplicas: 1 observedGeneration: 2 replicas: 1 has:deployment.kubernetes.io/revision: "6" Successful (Bmessage:kubectl-create kubectl-client-side-apply kube-controller-manager kubectl kubectl-rollout has:kubectl-rollout deployment.apps/nginx2 created I0318 12:55:28.228246 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx2-5744c8b44d to 3" I0318 12:55:28.259883 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx2-5744c8b44d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx2-5744c8b44d-9rfxk" I0318 12:55:28.272233 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx2-5744c8b44d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx2-5744c8b44d-sjxk7" I0318 12:55:28.279154 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx2-5744c8b44d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx2-5744c8b44d-cm6z2" deployment.apps "nginx2" deleted deployment.apps "nginx" deleted apps.sh:389: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bdeployment.apps/nginx-deployment created I0318 12:55:28.680636 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-57bf7fbc68 to 3" I0318 12:55:28.719226 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-57bf7fbc68-6j9b4" I0318 12:55:28.739249 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-57bf7fbc68-4srrh" I0318 12:55:28.739281 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-57bf7fbc68-vq6qd" 
apps.sh:392: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment: (Bapps.sh:393: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (Bapps.sh:394: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl: (Bdeployment.apps/nginx-deployment image updated (dry run) deployment.apps/nginx-deployment image updated (server dry run) apps.sh:398: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (Bapps.sh:399: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl: (Bdeployment.apps/nginx-deployment image updated I0318 12:55:29.226126 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-6444b54576 to 1" I0318 12:55:29.262782 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-6444b54576" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-6444b54576-rz6w9" apps.sh:402: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9: (Bapps.sh:403: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl: (Berror: unable to find container named "redis" deployment.apps/nginx-deployment image updated apps.sh:408: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (Bapps.sh:409: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl: (Bdeployment.apps/nginx-deployment image updated apps.sh:412: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9: (Bapps.sh:413: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl: (Bapps.sh:416: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9: (Bapps.sh:417: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl: (Bdeployment.apps/nginx-deployment image updated I0318 12:55:30.185784 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-57bf7fbc68 to 2 from 3" apps.sh:420: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (BI0318 12:55:30.215945 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-567698c558 to 1 from 0" I0318 12:55:30.221957 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-57bf7fbc68" fieldPath="" 
kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-57bf7fbc68-6j9b4" I0318 12:55:30.229724 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-567698c558" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-567698c558-zn8g2" apps.sh:421: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (Bapps.sh:424: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (Bapps.sh:425: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (Bdeployment.apps "nginx-deployment" deleted apps.sh:431: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (BI0318 12:55:30.759853 23056 horizontal.go:512] "Horizontal Pod Autoscaler has been deleted" HPA="namespace-1679144106-6445/frontend" deployment.apps/nginx-deployment created I0318 12:55:30.881164 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-57bf7fbc68 to 3" I0318 12:55:30.897978 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-57bf7fbc68-xpzl8" I0318 12:55:30.909152 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-57bf7fbc68-rrrjf" I0318 12:55:30.915938 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-57bf7fbc68-8lmvr" configmap/test-set-env-config created secret/test-set-env-secret created apps.sh:436: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment: (Bapps.sh:438: Successful get configmaps/test-set-env-config {{.metadata.name}}: test-set-env-config (Bapps.sh:439: Successful get secret {{range.items}}{{.metadata.name}}:{{end}}: test-set-env-secret: (BWarning: key key-2 transferred to KEY_2 deployment.apps/nginx-deployment env updated I0318 12:55:31.622884 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-d588bb564 to 1" I0318 12:55:31.641300 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-d588bb564" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-d588bb564-z2m2s" apps.sh:443: Successful get deploy nginx-deployment {{ (index (index .spec.template.spec.containers 0).env 0).name}}: KEY_2 (Bapps.sh:445: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1 (BWarning: key key-1 transferred to KEY_1 Warning: key key-2 transferred to 
KEY_2 deployment.apps/nginx-deployment env updated (dry run) Warning: key key-2 transferred to KEY_2 Warning: key key-1 transferred to KEY_1 deployment.apps/nginx-deployment env updated (server dry run) apps.sh:449: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1 (BWarning: key key-1 transferred to KEY_1 Warning: key key-2 transferred to KEY_2 deployment.apps/nginx-deployment env updated I0318 12:55:32.081671 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-57bf7fbc68 to 2 from 3" apps.sh:453: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 2 (BI0318 12:55:32.128261 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-6bf769bd to 1 from 0" I0318 12:55:32.134233 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-57bf7fbc68-8lmvr" I0318 12:55:32.141330 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-6bf769bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-6bf769bd-cmgvf" deployment.apps/nginx-deployment env updated I0318 12:55:32.277565 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-57bf7fbc68 to 1 from 2" I0318 12:55:32.307757 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-6bdc9df444 to 1 from 0" I0318 12:55:32.313819 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-57bf7fbc68-xpzl8" I0318 12:55:32.320942 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-6bdc9df444" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-6bdc9df444-fprw9" deployment.apps/nginx-deployment env updated I0318 12:55:32.410530 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-57bf7fbc68 to 0 from 1" Warning: key username transferred to USERNAME I0318 12:55:32.455950 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-5446b4888c to 1 from 0" deployment.apps/nginx-deployment env updated I0318 12:55:32.522776 23056 event.go:307] "Event occurred" 
object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-d588bb564 to 0 from 1" E0318 12:55:32.531802 23056 replica_set.go:544] sync "namespace-1679144118-22967/nginx-deployment-57bf7fbc68" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-57bf7fbc68": the object has been modified; please apply your changes to the latest version and try again Warning: key password transferred to PASSWORD Warning: key username transferred to USERNAME I0318 12:55:32.556982 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-694d45dfd5 to 1 from 0" deployment.apps/nginx-deployment env updated I0318 12:55:32.583992 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-5446b4888c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-5446b4888c-n86q6" deployment.apps/nginx-deployment env updated Successful (Bmessage:error: standard input cannot be used for multiple arguments has:standard input cannot be used for multiple arguments I0318 12:55:32.734430 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-d588bb564" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-d588bb564-z2m2s" I0318 12:55:32.756371 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-57bf7fbc68-rrrjf" deployment.apps "nginx-deployment" deleted E0318 12:55:32.881885 23056 replica_set.go:544] sync "namespace-1679144118-22967/nginx-deployment-694d45dfd5" failed with replicasets.apps "nginx-deployment-694d45dfd5" not found configmap "test-set-env-config" deleted secret "test-set-env-secret" deleted apps.sh:474: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (BE0318 12:55:33.125038 23056 replica_set.go:544] sync "namespace-1679144118-22967/nginx-deployment-5446b4888c" failed with replicasets.apps "nginx-deployment-5446b4888c" not found E0318 12:55:33.174872 23056 replica_set.go:544] sync "namespace-1679144118-22967/nginx-deployment-d588bb564" failed with replicasets.apps "nginx-deployment-d588bb564" not found E0318 12:55:33.228576 23056 replica_set.go:544] sync "namespace-1679144118-22967/nginx-deployment-56795f96bc" failed with replicasets.apps "nginx-deployment-56795f96bc" not found deployment.apps/nginx-deployment created I0318 12:55:33.251077 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-57bf7fbc68 to 3" apps.sh:477: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment: (Bapps.sh:478: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd: (BI0318 12:55:33.434795 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-57bf7fbc68" fieldPath="" 
kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-57bf7fbc68-jhlzx" apps.sh:479: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl: (BI0318 12:55:33.483841 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-57bf7fbc68-xt5nn" deployment.apps/nginx-deployment image updated I0318 12:55:33.543204 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-6444b54576 to 1" I0318 12:55:33.592354 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-57bf7fbc68-q2m7q" apps.sh:482: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9: (Bapps.sh:483: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl: (BI0318 12:55:33.685970 23056 event.go:307] "Event occurred" object="namespace-1679144118-22967/nginx-deployment-6444b54576" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-6444b54576-7hpjx" Successful (Bmessage:deployment.apps/nginx-deployment REVISION CHANGE-CAUSE 1 2 has:deployment.apps/nginx-deployment Successful (Bmessage:deployment.apps/nginx-deployment REVISION CHANGE-CAUSE 1 2 has:REVISION CHANGE-CAUSE Successful (Bmessage:deployment.apps/nginx-deployment REVISION CHANGE-CAUSE 1 2 has:1 Successful (Bmessage:deployment.apps/nginx-deployment REVISION CHANGE-CAUSE 1 2 has:2 Successful (Bmessage:deployment.apps/nginx-deployment REVISION CHANGE-CAUSE 1 2 has not:3 Successful (Bmessage:deployment.apps/nginx-deployment with revision #1 Pod Template: Labels: name=nginx pod-template-hash=57bf7fbc68 Containers: nginx: Image: registry.k8s.io/nginx:test-cmd Port: 80/TCP Host Port: 0/TCP Environment: Mounts: perl: Image: registry.k8s.io/perl Port: Host Port: Environment: Mounts: Volumes: has:deployment.apps/nginx-deployment with revision #1 Successful (Bmessage:deployment.apps/nginx-deployment with revision #1 Pod Template: Labels: name=nginx pod-template-hash=57bf7fbc68 Containers: nginx: Image: registry.k8s.io/nginx:test-cmd Port: 80/TCP Host Port: 0/TCP Environment: Mounts: perl: Image: registry.k8s.io/perl Port: Host Port: Environment: Mounts: Volumes: has:Pod Template: Successful (Bmessage:deployment.apps/nginx-deployment with revision #1 Pod Template: Labels: name=nginx pod-template-hash=57bf7fbc68 Containers: nginx: Image: registry.k8s.io/nginx:test-cmd Port: 80/TCP Host Port: 0/TCP Environment: Mounts: perl: Image: registry.k8s.io/perl Port: Host Port: Environment: Mounts: Volumes: has:registry.k8s.io/nginx:test-cmd Successful (Bmessage:deployment.apps/nginx-deployment with revision #1 Pod Template: Labels: name=nginx pod-template-hash=57bf7fbc68 Containers: nginx: Image: registry.k8s.io/nginx:test-cmd Port: 80/TCP Host Port: 0/TCP Environment: Mounts: perl: Image: registry.k8s.io/perl Port: Host Port: Environment: 
Mounts: Volumes:
has:registry.k8s.io/perl
Successful
message:deployment.apps/nginx-deployment with revision #2 Pod Template: Labels: name=nginx pod-template-hash=6444b54576 Containers: nginx: Image: registry.k8s.io/nginx:1.7.9 Port: 80/TCP Host Port: 0/TCP Environment: Mounts: perl: Image: registry.k8s.io/perl Port: Host Port: Environment: Mounts: Volumes:
has:deployment.apps/nginx-deployment with revision #2
Successful
message:deployment.apps/nginx-deployment with revision #2 Pod Template: Labels: name=nginx pod-template-hash=6444b54576 Containers: nginx: Image: registry.k8s.io/nginx:1.7.9 Port: 80/TCP Host Port: 0/TCP Environment: Mounts: perl: Image: registry.k8s.io/perl Port: Host Port: Environment: Mounts: Volumes:
has:Pod Template:
Successful
message:deployment.apps/nginx-deployment with revision #2 Pod Template: Labels: name=nginx pod-template-hash=6444b54576 Containers: nginx: Image: registry.k8s.io/nginx:1.7.9 Port: 80/TCP Host Port: 0/TCP Environment: Mounts: perl: Image: registry.k8s.io/perl Port: Host Port: Environment: Mounts: Volumes:
has:registry.k8s.io/nginx:1.7.9
Successful
message:deployment.apps/nginx-deployment with revision #2 Pod Template: Labels: name=nginx pod-template-hash=6444b54576 Containers: nginx: Image: registry.k8s.io/nginx:1.7.9 Port: 80/TCP Host Port: 0/TCP Environment: Mounts: perl: Image: registry.k8s.io/perl Port: Host Port: Environment: Mounts: Volumes:
has:registry.k8s.io/perl
deployment.apps "nginx-deployment" deleted
+++ exit code: 0
Recording: run_rs_tests
Running command: run_rs_tests
E0318 12:55:34.036750 23056 replica_set.go:544] sync "namespace-1679144118-22967/nginx-deployment-57bf7fbc68" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-57bf7fbc68": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1679144118-22967/nginx-deployment-57bf7fbc68, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: e92c2b65-65c2-4a0a-98b1-e631a66f6702, UID in object meta:
+++ Running case: test-cmd.run_rs_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
+++ [0318 12:55:34] Creating namespace namespace-1679144134-15522
E0318 12:55:34.077690 23056 replica_set.go:544] sync "namespace-1679144118-22967/nginx-deployment-6444b54576" failed with replicasets.apps "nginx-deployment-6444b54576" not found
namespace/namespace-1679144134-15522 created
Context "test" modified.
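The deployment block above exercises kubectl set env, kubectl set image, and kubectl rollout history. A minimal sketch of the invocations involved, assuming the nginx-deployment, test-set-env-config, and test-set-env-secret objects created by the test; flag spellings follow current kubectl and are not copied from test/cmd/apps.sh:

# Inject env vars from a configmap and a secret; kubectl upper-cases
# invalid names, hence the "key key-2 transferred to KEY_2" warnings.
kubectl set env deployment/nginx-deployment --from=configmap/test-set-env-config
kubectl set env deployment/nginx-deployment --from=secret/test-set-env-secret --dry-run=server
# Changing one container image rolls a new ReplicaSet and records revision 2.
kubectl set image deployment/nginx-deployment nginx=registry.k8s.io/nginx:1.7.9
# Inspect a single revision's pod template, as in the "with revision #2" output.
kubectl rollout history deployment/nginx-deployment --revision=2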
+++ [0318 12:55:34] Testing kubectl(v1:replicasets) apps.sh:645: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Breplicaset.apps/frontend created +++ [0318 12:55:34] Deleting rs I0318 12:55:34.515140 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-lkwrm" I0318 12:55:34.531908 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-8frl6" I0318 12:55:34.531941 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-4bd5p" E0318 12:55:34.566704 23056 replica_set.go:544] sync "namespace-1679144134-15522/frontend" failed with replicasets.apps "frontend" not found replicaset.apps "frontend" deleted E0318 12:55:34.629163 23056 replica_set.go:544] sync "namespace-1679144134-15522/frontend" failed with replicasets.apps "frontend" not found apps.sh:651: Successful get pods -l tier=frontend {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:655: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Breplicaset.apps/frontend created I0318 12:55:34.908335 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-pfkrs" I0318 12:55:34.925804 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-9847x" I0318 12:55:34.926054 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-qg76d" apps.sh:659: Successful get pods -l tier=frontend {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis: (B+++ [0318 12:55:34] Deleting rs replicaset.apps "frontend" deleted E0318 12:55:35.125425 23056 replica_set.go:544] sync "namespace-1679144134-15522/frontend" failed with Operation cannot be fulfilled on replicasets.apps "frontend": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1679144134-15522/frontend, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: c213d27d-237e-4377-b4da-74f276f436b0, UID in object meta: apps.sh:663: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:665: Successful get pods -l tier=frontend {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis: (Bpod "frontend-9847x" deleted pod "frontend-pfkrs" deleted pod "frontend-qg76d" deleted apps.sh:668: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:672: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Breplicaset.apps/frontend created I0318 12:55:35.775445 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-kbwjk" I0318 12:55:35.792990 23056 event.go:307] "Event occurred" 
object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-rtz7h" I0318 12:55:35.793025 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-mvmgp" apps.sh:676: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend: (Bmatched Name: matched Pod Template: matched Labels: matched Selector: matched Replicas: matched Pods Status: matched Volumes: apps.sh:678: Successful describe rs frontend: Name: frontend Namespace: namespace-1679144134-15522 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s replicaset-controller Created pod: frontend-kbwjk Normal SuccessfulCreate 0s replicaset-controller Created pod: frontend-rtz7h Normal SuccessfulCreate 0s replicaset-controller Created pod: frontend-mvmgp (Bapps.sh:680: Successful describe Name: frontend Namespace: namespace-1679144134-15522 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s replicaset-controller Created pod: frontend-kbwjk Normal SuccessfulCreate 0s replicaset-controller Created pod: frontend-rtz7h Normal SuccessfulCreate 0s replicaset-controller Created pod: frontend-mvmgp (B apps.sh:682: Successful describe Name: frontend Namespace: namespace-1679144134-15522 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: (B apps.sh:684: Successful describe Name: frontend Namespace: namespace-1679144134-15522 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-kbwjk Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-rtz7h Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-mvmgp (B matched Name: matched Pod 
Template: matched Labels: matched Selector: matched Replicas: matched Pods Status: matched Volumes: Successful describe rs: Name: frontend Namespace: namespace-1679144134-15522 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-kbwjk Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-rtz7h Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-mvmgp (BSuccessful describe Name: frontend Namespace: namespace-1679144134-15522 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-kbwjk Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-rtz7h Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-mvmgp (BSuccessful describe Name: frontend Namespace: namespace-1679144134-15522 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: (BSuccessful describe Name: frontend Namespace: namespace-1679144134-15522 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-kbwjk Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-rtz7h Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-mvmgp (Bmatched Name: matched Image: matched Node: matched Labels: matched Status: matched Controlled By Successful describe pods: Name: frontend-kbwjk Namespace: namespace-1679144134-15522 Priority: 0 Node: Labels: app=guestbook tier=frontend Annotations: Status: Pending IP: IPs: Controlled By: ReplicaSet/frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: QoS Class: Burstable Node-Selectors: Tolerations: Events: Name: frontend-mvmgp Namespace: namespace-1679144134-15522 Priority: 
0 Node: Labels: app=guestbook tier=frontend Annotations: Status: Pending IP: IPs: Controlled By: ReplicaSet/frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: QoS Class: Burstable Node-Selectors: Tolerations: Events: Name: frontend-rtz7h Namespace: namespace-1679144134-15522 Priority: 0 Node: Labels: app=guestbook tier=frontend Annotations: Status: Pending IP: IPs: Controlled By: ReplicaSet/frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: QoS Class: Burstable Node-Selectors: Tolerations: Events: (Bquery for replicasets had limit param query for pods had limit param query for events had limit param query for replicasets had user-specified limit param Successful describe replicasets verbose logs: I0318 12:55:36.567374 50856 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:55:36.572301 50856 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:55:36.579914 50856 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1679144134-15522/replicasets?limit=500 200 OK in 1 milliseconds I0318 12:55:36.583117 50856 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1679144134-15522/replicasets/frontend 200 OK in 1 milliseconds I0318 12:55:36.586576 50856 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144134-15522/pods?labelSelector=app%3Dguestbook%2Ctier%3Dfrontend&limit=500 200 OK in 1 milliseconds I0318 12:55:36.588728 50856 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144134-15522/events?fieldSelector=involvedObject.kind%3DReplicaSet%2CinvolvedObject.uid%3Dd726100a-3fb8-4fca-852f-9cb472de600d%2CinvolvedObject.name%3Dfrontend%2CinvolvedObject.namespace%3Dnamespace-1679144134-15522&limit=500 200 OK in 1 milliseconds (Bapps.sh:700: Successful get rs frontend {{.spec.replicas}}: 3 (Breplicaset.apps/frontend scaled (dry run) replicaset.apps/frontend scaled (server dry run) apps.sh:704: Successful get rs frontend {{.spec.replicas}}: 3 (Breplicaset.apps/frontend scaled E0318 12:55:36.996714 23056 replica_set.go:220] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend namespace-1679144134-15522 d726100a-3fb8-4fca-852f-9cb472de600d 3121 2 2023-03-18 12:55:35 +0000 UTC map[app:guestbook tier:frontend] map[] [] [] [{kubectl Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {kube-controller-manager Update apps/v1 2023-03-18 12:55:35 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kubectl-create Update apps/v1 2023-03-18 12:55:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[app:guestbook tier:frontend] map[] [] [] []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v3 [] [] [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{104857600 0} {} 100Mi BinarySI}] []} [] [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001f2ac98 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} I0318 12:55:37.016793 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: frontend-kbwjk" apps.sh:708: Successful get rs frontend {{.spec.replicas}}: 2 (BI0318 12:55:37.214216 23056 horizontal.go:512] "Horizontal Pod Autoscaler has been deleted" HPA="namespace-1679144118-22967/nginx-deployment" deployment.apps/scale-1 created I0318 12:55:37.293674 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-1" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set scale-1-76b9689797 to 1" I0318 12:55:37.330357 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-1-76b9689797" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-1-76b9689797-2snsg" deployment.apps/scale-2 created I0318 12:55:37.502947 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set scale-2-76b9689797 to 1" I0318 12:55:37.513664 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-2-76b9689797" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-2-76b9689797-sbgdc" deployment.apps/scale-3 created I0318 12:55:37.722421 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-3" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set 
scale-3-76b9689797 to 1" I0318 12:55:37.739647 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-3-76b9689797" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-3-76b9689797-pq4vg" apps.sh:714: Successful get deploy scale-1 {{.spec.replicas}}: 1 (Bapps.sh:715: Successful get deploy scale-2 {{.spec.replicas}}: 1 (Bapps.sh:716: Successful get deploy scale-3 {{.spec.replicas}}: 1 (Bdeployment.apps/scale-1 scaled (dry run) deployment.apps/scale-2 scaled (dry run) deployment.apps/scale-3 scaled (dry run) deployment.apps/scale-1 scaled (server dry run) deployment.apps/scale-2 scaled (server dry run) deployment.apps/scale-3 scaled (server dry run) apps.sh:720: Successful get deploy scale-1 {{.spec.replicas}}: 1 (Bapps.sh:721: Successful get deploy scale-2 {{.spec.replicas}}: 1 (Bapps.sh:722: Successful get deploy scale-3 {{.spec.replicas}}: 1 (Bdeployment.apps/scale-1 scaled deployment.apps/scale-2 scaled I0318 12:55:38.355891 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-1" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set scale-1-76b9689797 to 2 from 1" I0318 12:55:38.355928 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set scale-2-76b9689797 to 2 from 1" I0318 12:55:38.372920 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-2-76b9689797" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-2-76b9689797-tx9fw" I0318 12:55:38.373631 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-1-76b9689797" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-1-76b9689797-4wg7f" apps.sh:725: Successful get deploy scale-1 {{.spec.replicas}}: 2 (Bapps.sh:726: Successful get deploy scale-2 {{.spec.replicas}}: 2 (Bapps.sh:727: Successful get deploy scale-3 {{.spec.replicas}}: 1 (Bdeployment.apps/scale-1 scaled deployment.apps/scale-2 scaled I0318 12:55:38.650927 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-1" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set scale-1-76b9689797 to 3 from 2" deployment.apps/scale-3 scaled I0318 12:55:38.658667 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set scale-2-76b9689797 to 3 from 2" I0318 12:55:38.666684 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-1-76b9689797" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-1-76b9689797-72nj4" I0318 12:55:38.673579 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-3" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set scale-3-76b9689797 to 3 from 1" I0318 12:55:38.673593 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-2-76b9689797" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" 
reason="SuccessfulCreate" message="Created pod: scale-2-76b9689797-src97" I0318 12:55:38.704558 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-3-76b9689797" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-3-76b9689797-kzp7c" I0318 12:55:38.724344 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/scale-3-76b9689797" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-3-76b9689797-bw5wd" apps.sh:730: Successful get deploy scale-1 {{.spec.replicas}}: 3 (Bapps.sh:731: Successful get deploy scale-2 {{.spec.replicas}}: 3 (Bapps.sh:732: Successful get deploy scale-3 {{.spec.replicas}}: 3 (Breplicaset.apps "frontend" deleted deployment.apps "scale-1" deleted deployment.apps "scale-2" deleted deployment.apps "scale-3" deleted replicaset.apps/frontend created I0318 12:55:39.314547 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-gm5tz" I0318 12:55:39.331138 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-t7vbj" I0318 12:55:39.331174 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-w5x5b" apps.sh:740: Successful get rs frontend {{.spec.replicas}}: 3 (BI0318 12:55:39.460582 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144134-15522/frontend" clusterIPs=map[IPv4:10.0.0.194] service/frontend exposed apps.sh:744: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: 80 (Bservice "frontend" deleted apps.sh:750: Successful get rs frontend {{.metadata.generation}}: 1 (Breplicaset.apps/frontend image updated apps.sh:752: Successful get rs frontend {{.metadata.generation}}: 2 (Breplicaset.apps/frontend env updated apps.sh:754: Successful get rs frontend {{.metadata.generation}}: 3 (Breplicaset.apps/frontend resource requirements updated (dry run) replicaset.apps/frontend resource requirements updated (server dry run) apps.sh:757: Successful get rs frontend {{.metadata.generation}}: 3 (Breplicaset.apps/frontend resource requirements updated apps.sh:759: Successful get rs frontend {{.metadata.generation}}: 4 (Breplicaset.apps/frontend serviceaccount updated (dry run) replicaset.apps/frontend serviceaccount updated (server dry run) apps.sh:762: Successful get rs frontend {{.metadata.generation}}: 4 (Breplicaset.apps/frontend serviceaccount updated apps.sh:764: Successful get rs frontend {{.metadata.generation}}: 5 (BSuccessful (Bmessage:kubectl-create kube-controller-manager kubectl-set has:kubectl-set apps.sh:772: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend: (Breplicaset.apps "frontend" deleted apps.sh:776: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:780: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Breplicaset.apps/frontend created W0318 12:55:41.196682 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource 
E0318 12:55:41.196716 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource I0318 12:55:41.205397 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-fvchf" I0318 12:55:41.222301 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-lx5fw" I0318 12:55:41.222328 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-gz7fz" replicaset.apps/redis-slave created I0318 12:55:41.414818 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/redis-slave" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-cgwnz" I0318 12:55:41.436184 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/redis-slave" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-hctgv" apps.sh:785: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave: (Bapps.sh:789: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave: (Breplicaset.apps "frontend" deleted replicaset.apps "redis-slave" deleted apps.sh:793: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:798: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Breplicaset.apps/frontend created I0318 12:55:42.038116 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-vnmmw" I0318 12:55:42.071311 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-wm8kq" I0318 12:55:42.071342 23056 event.go:307] "Event occurred" object="namespace-1679144134-15522/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-fjvv2" apps.sh:801: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend: (Bhorizontalpodautoscaler.autoscaling/frontend autoscaled apps.sh:804: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 70 (Bhorizontalpodautoscaler.autoscaling "frontend" deleted horizontalpodautoscaler.autoscaling/frontend autoscaled apps.sh:808: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80 (BSuccessful (Bmessage:kubectl-autoscale has:kubectl-autoscale horizontalpodautoscaler.autoscaling "frontend" deleted error: required flag(s) "max" not set replicaset.apps "frontend" deleted +++ exit code: 0 Recording: run_stateful_set_tests Running command: run_stateful_set_tests +++ Running case: test-cmd.run_stateful_set_tests +++ working dir: 
/home/prow/go/src/k8s.io/kubernetes
+++ command: run_stateful_set_tests
+++ [0318 12:55:42] Creating namespace namespace-1679144142-11099
namespace/namespace-1679144142-11099 created
Context "test" modified.
+++ [0318 12:55:42] Testing kubectl(v1:statefulsets)
apps.sh:601: Successful get statefulset {{range.items}}{{.metadata.name}}:{{end}}:
I0318 12:55:43.193115 19996 controller.go:624] quota admission added evaluator for: statefulsets.apps
statefulset.apps/nginx created
query for statefulsets had limit param
query for pods had limit param
query for events had limit param
query for statefulsets had user-specified limit param
Successful describe statefulsets verbose logs:
I0318 12:55:43.262750 51917 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:55:43.268716 51917 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0318 12:55:43.274062 51917 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1679144142-11099/statefulsets?limit=500 200 OK in 1 milliseconds
I0318 12:55:43.276390 51917 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1679144142-11099/statefulsets/nginx 200 OK in 1 milliseconds
I0318 12:55:43.280411 51917 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144142-11099/pods?labelSelector=app%3Dnginx-statefulset&limit=500 200 OK in 1 milliseconds
I0318 12:55:43.282003 51917 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144142-11099/events?fieldSelector=involvedObject.namespace%3Dnamespace-1679144142-11099%2CinvolvedObject.kind%3DStatefulSet%2CinvolvedObject.uid%3Db970738b-8266-4074-9a0c-1b47b19258bd%2CinvolvedObject.name%3Dnginx&limit=500 200 OK in 1 milliseconds
apps.sh:610: Successful get statefulset nginx {{.spec.replicas}}: 0
apps.sh:611: Successful get statefulset nginx {{.status.observedGeneration}}: 1
statefulset.apps/nginx scaled
I0318 12:55:43.573972 23056 event.go:307] "Event occurred" object="namespace-1679144142-11099/nginx" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod nginx-0 in StatefulSet nginx successful"
apps.sh:615: Successful get statefulset nginx {{.spec.replicas}}: 1
apps.sh:616: Successful get statefulset nginx {{.status.observedGeneration}}: 2
statefulset.apps/nginx restarted
apps.sh:624: Successful get statefulset nginx {{.status.observedGeneration}}: 3
statefulset.apps "nginx" deleted
I0318 12:55:43.985896 23056 stateful_set.go:458] "StatefulSet has been deleted" key="namespace-1679144142-11099/nginx"
+++ exit code: 0
Recording: run_statefulset_history_tests
Running command: run_statefulset_history_tests
+++ Running case: test-cmd.run_statefulset_history_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_statefulset_history_tests
+++ [0318 12:55:44] Creating namespace namespace-1679144144-13056
namespace/namespace-1679144144-13056 created
Context "test" modified.
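The scale-and-restart sequence in run_stateful_set_tests above maps onto three commands; a sketch assuming the nginx statefulset the test creates, with .status.observedGeneration advancing once per accepted spec change:

# Generation 1 -> 2; the controller reacts with "create Pod nginx-0 ... successful".
kubectl scale statefulset nginx --replicas=1
# A restart stamps the pod template (restartedAt annotation), so generation -> 3.
kubectl rollout restart statefulset nginx
kubectl get statefulset nginx -o go-template='{{.status.observedGeneration}}'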
+++ [0318 12:55:44] Testing kubectl(v1:statefulsets, v1:controllerrevisions) apps.sh:519: Successful get statefulset {{range.items}}{{.metadata.name}}:{{end}}: (BFlag --record has been deprecated, --record will be removed in the future statefulset.apps/nginx created W0318 12:55:44.536413 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:55:44.536453 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource apps.sh:523: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true"},"labels":{"app":"nginx-statefulset"},"name":"nginx","namespace":"namespace-1679144144-13056"},"spec":{"replicas":0,"selector":{"matchLabels":{"app":"nginx-statefulset"}},"serviceName":"nginx","template":{"metadata":{"labels":{"app":"nginx-statefulset"}},"spec":{"containers":[{"command":["sh","-c","while true; do sleep 1; done"],"image":"registry.k8s.io/nginx-slim:0.7","name":"nginx","ports":[{"containerPort":80,"name":"web"}]}],"terminationGracePeriodSeconds":5}},"updateStrategy":{"type":"RollingUpdate"}}} kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true]: (Bstatefulset.apps/nginx skipped rollback (current template already matches revision 1) apps.sh:526: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.7: (Bapps.sh:527: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1 (BFlag --record has been deprecated, --record will be removed in the future statefulset.apps/nginx configured apps.sh:530: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.8: (Bapps.sh:531: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/pause:2.0: (Bapps.sh:532: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2 (Bapps.sh:533: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true"},"labels":{"app":"nginx-statefulset"},"name":"nginx","namespace":"namespace-1679144144-13056"},"spec":{"replicas":0,"selector":{"matchLabels":{"app":"nginx-statefulset"}},"serviceName":"nginx","template":{"metadata":{"labels":{"app":"nginx-statefulset"}},"spec":{"containers":[{"command":["sh","-c","while true; do sleep 1; 
done"],"image":"registry.k8s.io/nginx-slim:0.7","name":"nginx","ports":[{"containerPort":80,"name":"web"}]}],"terminationGracePeriodSeconds":5}},"updateStrategy":{"type":"RollingUpdate"}}} kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true]:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true"},"labels":{"app":"nginx-statefulset"},"name":"nginx","namespace":"namespace-1679144144-13056"},"spec":{"replicas":0,"selector":{"matchLabels":{"app":"nginx-statefulset"}},"serviceName":"nginx","template":{"metadata":{"labels":{"app":"nginx-statefulset"}},"spec":{"containers":[{"command":["sh","-c","while true; do sleep 1; done"],"image":"registry.k8s.io/nginx-slim:0.8","name":"nginx","ports":[{"containerPort":80,"name":"web"}]},{"image":"registry.k8s.io/pause:2.0","name":"pause","ports":[{"containerPort":81,"name":"web-2"}]}],"terminationGracePeriodSeconds":5}},"updateStrategy":{"type":"RollingUpdate"}}} kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true]: (BSuccessful (Bmessage:statefulset.apps/nginx REVISION CHANGE-CAUSE 1 kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 2 kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:statefulset.apps/nginx Successful (Bmessage:statefulset.apps/nginx REVISION CHANGE-CAUSE 1 kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 2 kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:REVISION CHANGE-CAUSE Successful (Bmessage:statefulset.apps/nginx REVISION CHANGE-CAUSE 1 kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 2 kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:1 kubectl apply Successful (Bmessage:statefulset.apps/nginx REVISION CHANGE-CAUSE 1 kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 2 kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:2 kubectl apply Successful (Bmessage:statefulset.apps/nginx with revision #1 Pod Template: Labels: app=nginx-statefulset Containers: nginx: Image: registry.k8s.io/nginx-slim:0.7 Port: 80/TCP Host Port: 0/TCP Command: sh 
-c while true; do sleep 1; done Environment: Mounts: Volumes: has:statefulset.apps/nginx with revision #1 Successful (Bmessage:statefulset.apps/nginx with revision #1 Pod Template: Labels: app=nginx-statefulset Containers: nginx: Image: registry.k8s.io/nginx-slim:0.7 Port: 80/TCP Host Port: 0/TCP Command: sh -c while true; do sleep 1; done Environment: Mounts: Volumes: has:Pod Template: Successful (Bmessage:statefulset.apps/nginx with revision #1 Pod Template: Labels: app=nginx-statefulset Containers: nginx: Image: registry.k8s.io/nginx-slim:0.7 Port: 80/TCP Host Port: 0/TCP Command: sh -c while true; do sleep 1; done Environment: Mounts: Volumes: has:registry.k8s.io/nginx-slim:0.7 Successful (Bmessage:statefulset.apps/nginx with revision #2 Pod Template: Labels: app=nginx-statefulset Containers: nginx: Image: registry.k8s.io/nginx-slim:0.8 Port: 80/TCP Host Port: 0/TCP Command: sh -c while true; do sleep 1; done Environment: Mounts: pause: Image: registry.k8s.io/pause:2.0 Port: 81/TCP Host Port: 0/TCP Environment: Mounts: Volumes: has:statefulset.apps/nginx with revision #2 Successful (Bmessage:statefulset.apps/nginx with revision #2 Pod Template: Labels: app=nginx-statefulset Containers: nginx: Image: registry.k8s.io/nginx-slim:0.8 Port: 80/TCP Host Port: 0/TCP Command: sh -c while true; do sleep 1; done Environment: Mounts: pause: Image: registry.k8s.io/pause:2.0 Port: 81/TCP Host Port: 0/TCP Environment: Mounts: Volumes: has:Pod Template: Successful (Bmessage:statefulset.apps/nginx with revision #2 Pod Template: Labels: app=nginx-statefulset Containers: nginx: Image: registry.k8s.io/nginx-slim:0.8 Port: 80/TCP Host Port: 0/TCP Command: sh -c while true; do sleep 1; done Environment: Mounts: pause: Image: registry.k8s.io/pause:2.0 Port: 81/TCP Host Port: 0/TCP Environment: Mounts: Volumes: has:registry.k8s.io/nginx-slim:0.8 Successful (Bmessage:statefulset.apps/nginx with revision #2 Pod Template: Labels: app=nginx-statefulset Containers: nginx: Image: registry.k8s.io/nginx-slim:0.8 Port: 80/TCP Host Port: 0/TCP Command: sh -c while true; do sleep 1; done Environment: Mounts: pause: Image: registry.k8s.io/pause:2.0 Port: 81/TCP Host Port: 0/TCP Environment: Mounts: Volumes: has:registry.k8s.io/pause:2.0 statefulset.apps/nginx will roll back to Pod Template: Labels: app=nginx-statefulset Containers: nginx: Image: registry.k8s.io/nginx-slim:0.7 Port: 80/TCP Host Port: 0/TCP Command: sh -c while true; do sleep 1; done Environment: Mounts: Volumes: (dry run) statefulset.apps/nginx rolled back (server dry run) apps.sh:554: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.8: (Bapps.sh:555: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/pause:2.0: (Bapps.sh:556: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2 (Bstatefulset.apps/nginx rolled back apps.sh:559: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.7: (Bapps.sh:560: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1 (BSuccessful (Bmessage:statefulset.apps/nginx REVISION CHANGE-CAUSE 2 kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 3 kubectl apply 
--filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:statefulset.apps/nginx Successful (Bmessage:statefulset.apps/nginx REVISION CHANGE-CAUSE 2 kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 3 kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:REVISION CHANGE-CAUSE Successful (Bmessage:statefulset.apps/nginx REVISION CHANGE-CAUSE 2 kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 3 kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:2 kubectl apply Successful (Bmessage:statefulset.apps/nginx REVISION CHANGE-CAUSE 2 kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 3 kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:3 kubectl apply Successful (Bmessage:error: unable to find specified revision 1000000 in history has:unable to find specified revision apps.sh:570: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.7: (Bapps.sh:571: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1 (Bstatefulset.apps/nginx rolled back apps.sh:574: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.8: (Bapps.sh:575: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/pause:2.0: (Bapps.sh:576: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2 (BSuccessful (Bmessage:statefulset.apps/nginx REVISION CHANGE-CAUSE 3 kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 4 kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:statefulset.apps/nginx Successful (Bmessage:statefulset.apps/nginx REVISION CHANGE-CAUSE 3 kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 4 kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true has:REVISION CHANGE-CAUSE Successful (Bmessage:statefulset.apps/nginx REVISION CHANGE-CAUSE 3 kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true 4 kubectl apply 
--filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
has:3 kubectl apply
Successful
message:statefulset.apps/nginx
REVISION  CHANGE-CAUSE
3         kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
4         kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
has:4 kubectl apply
statefulset.apps "nginx" deleted
I0318 12:55:46.787572 23056 stateful_set.go:458] "StatefulSet has been deleted" key="namespace-1679144144-13056/nginx"
+++ exit code: 0
Recording: run_lists_tests
Running command: run_lists_tests
+++ Running case: test-cmd.run_lists_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_lists_tests
+++ [0318 12:55:46] Creating namespace namespace-1679144146-22494
namespace/namespace-1679144146-22494 created
Context "test" modified.
+++ [0318 12:55:47] Testing kubectl(v1:lists)
I0318 12:55:47.221078 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144146-22494/list-service-test" clusterIPs=map[IPv4:10.0.0.117]
service/list-service-test created
deployment.apps/list-deployment-test created
I0318 12:55:47.249225 23056 event.go:307] "Event occurred" object="namespace-1679144146-22494/list-deployment-test" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set list-deployment-test-d8cbcf554 to 1"
I0318 12:55:47.284597 23056 event.go:307] "Event occurred" object="namespace-1679144146-22494/list-deployment-test-d8cbcf554" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: list-deployment-test-d8cbcf554-k7hxb"
service "list-service-test" deleted
deployment.apps "list-deployment-test" deleted
+++ exit code: 0
Recording: run_multi_resources_tests
Running command: run_multi_resources_tests
+++ Running case: test-cmd.run_multi_resources_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_multi_resources_tests
+++ [0318 12:55:47] Creating namespace namespace-1679144147-2276
namespace/namespace-1679144147-2276 created
Context "test" modified.
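The rollback portion of the statefulset history tests above reduces to kubectl rollout undo; a sketch, assuming the nginx statefulset built from the two rollingupdate-statefulset YAMLs in hack/testdata:

# Both dry-run flavors only report what would change.
kubectl rollout undo statefulset/nginx --dry-run=client
# Undo to the previous template; the old revision is re-recorded (1 becomes 3).
kubectl rollout undo statefulset/nginx
# A revision that was never recorded fails, as asserted above.
kubectl rollout undo statefulset/nginx --to-revision=1000000
# Roll forward again to the two-container template; recorded as revision 4.
kubectl rollout undo statefulset/nginx --to-revision=2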
Recording: run_multi_resources_tests
Running command: run_multi_resources_tests

+++ Running case: test-cmd.run_multi_resources_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_multi_resources_tests
+++ [0318 12:55:47] Creating namespace namespace-1679144147-2276
namespace/namespace-1679144147-2276 created
Context "test" modified.
+++ [0318 12:55:47] Testing kubectl(v1:multiple resources)
Testing with file hack/testdata/multi-resource-yaml.yaml and replace with file hack/testdata/multi-resource-yaml-modify.yaml
generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}:
generic-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
I0318 12:55:47.950818 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144147-2276/mock" clusterIPs=map[IPv4:10.0.0.79]
service/mock created
replicationcontroller/mock created
I0318 12:55:48.014377 23056 event.go:307] "Event occurred" object="namespace-1679144147-2276/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-b977l"
generic-resources.sh:72: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock:
generic-resources.sh:80: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock:
NAME           TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
service/mock   ClusterIP  10.0.0.79   <none>       99/TCP   1s

NAME                         DESIRED  CURRENT  READY  AGE
replicationcontroller/mock   1        1        0      1s

Name:              mock
Namespace:         namespace-1679144147-2276
Labels:            app=mock
Annotations:       <none>
Selector:          app=mock
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.0.0.79
IPs:               10.0.0.79
Port:              <unset>  99/TCP
TargetPort:        9949/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>

Name:         mock
Namespace:    namespace-1679144147-2276
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: mock-b977l

service "mock" deleted
replicationcontroller "mock" deleted
I0318 12:55:48.618860 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144147-2276/mock" clusterIPs=map[IPv4:10.0.0.75]
service/mock replaced
replicationcontroller/mock replaced
I0318 12:55:48.673746 23056 event.go:307] "Event occurred" object="namespace-1679144147-2276/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-wmknk"
generic-resources.sh:96: Successful get services mock {{.metadata.labels.status}}: replaced
generic-resources.sh:102: Successful get rc mock {{.metadata.labels.status}}: replaced
service/mock edited
replicationcontroller/mock edited
generic-resources.sh:114: Successful get services mock {{.metadata.labels.status}}: edited
generic-resources.sh:120: Successful get rc mock {{.metadata.labels.status}}: edited
service/mock labeled
replicationcontroller/mock labeled
generic-resources.sh:134: Successful get services mock {{.metadata.labels.labeled}}: true
generic-resources.sh:140: Successful get rc mock {{.metadata.labels.labeled}}: true
service/mock annotated
replicationcontroller/mock annotated
generic-resources.sh:153: Successful get services mock {{.metadata.annotations.annotated}}: true
generic-resources.sh:159: Successful get rc mock {{.metadata.annotations.annotated}}: true
service "mock" deleted
replicationcontroller "mock" deleted
Testing with file hack/testdata/multi-resource-list.json and replace with file hack/testdata/multi-resource-list-modify.json
generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}:
generic-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
I0318 12:55:50.070556 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144147-2276/mock" clusterIPs=map[IPv4:10.0.0.54]
service/mock created
replicationcontroller/mock created
I0318 12:55:50.107366 23056 event.go:307] "Event occurred" object="namespace-1679144147-2276/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-c6d9x"
generic-resources.sh:72: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock:
generic-resources.sh:80: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock:
NAME           TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
service/mock   ClusterIP  10.0.0.54   <none>       99/TCP   0s

NAME                         DESIRED  CURRENT  READY  AGE
replicationcontroller/mock   1        1        0      0s

[service and replication controller descriptions identical in form to the YAML case above, with IP 10.0.0.54 and pod mock-c6d9x]

service "mock" deleted
replicationcontroller "mock" deleted
I0318 12:55:50.690450 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144147-2276/mock" clusterIPs=map[IPv4:10.0.0.88]
service/mock replaced
replicationcontroller/mock replaced
I0318 12:55:50.717354 23056 event.go:307] "Event occurred" object="namespace-1679144147-2276/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-xmbpj"
generic-resources.sh:96: Successful get services mock {{.metadata.labels.status}}: replaced
generic-resources.sh:102: Successful get rc mock {{.metadata.labels.status}}: replaced
service/mock edited
replicationcontroller/mock edited
generic-resources.sh:114: Successful get services mock {{.metadata.labels.status}}: edited
generic-resources.sh:120: Successful get rc mock {{.metadata.labels.status}}: edited
service/mock labeled
replicationcontroller/mock labeled
generic-resources.sh:134: Successful get services mock {{.metadata.labels.labeled}}: true
generic-resources.sh:140: Successful get rc mock {{.metadata.labels.labeled}}: true
service/mock annotated
replicationcontroller/mock annotated
generic-resources.sh:153: Successful get services mock {{.metadata.annotations.annotated}}: true
generic-resources.sh:159: Successful get rc mock {{.metadata.annotations.annotated}}: true
service "mock" deleted
replicationcontroller "mock" deleted
Testing with file hack/testdata/multi-resource-json.json and replace with file hack/testdata/multi-resource-json-modify.json
generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}:
generic-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
I0318 12:55:52.134769 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144147-2276/mock" clusterIPs=map[IPv4:10.0.0.220]
service/mock created
replicationcontroller/mock created
I0318 12:55:52.189299 23056 event.go:307] "Event occurred" object="namespace-1679144147-2276/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-6bbnp"
generic-resources.sh:72: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock:
generic-resources.sh:80: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock:
NAME           TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
service/mock   ClusterIP  10.0.0.220  <none>       99/TCP   0s

NAME                         DESIRED  CURRENT  READY  AGE
replicationcontroller/mock   1        1        0      0s

[service and replication controller descriptions identical in form to the YAML case above, with IP 10.0.0.220 and pod mock-6bbnp]

service "mock" deleted
replicationcontroller "mock" deleted
I0318 12:55:52.827724 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144147-2276/mock" clusterIPs=map[IPv4:10.0.0.226]
service/mock replaced
replicationcontroller/mock replaced
I0318 12:55:52.886899 23056 event.go:307] "Event occurred" object="namespace-1679144147-2276/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-4rh9v"
generic-resources.sh:96: Successful get services mock {{.metadata.labels.status}}: replaced
generic-resources.sh:102: Successful get rc mock {{.metadata.labels.status}}: replaced
service/mock edited
replicationcontroller/mock edited
generic-resources.sh:114: Successful get services mock {{.metadata.labels.status}}: edited
generic-resources.sh:120: Successful get rc mock {{.metadata.labels.status}}: edited
service/mock labeled
replicationcontroller/mock labeled
generic-resources.sh:134: Successful get services mock {{.metadata.labels.labeled}}: true
generic-resources.sh:140: Successful get rc mock {{.metadata.labels.labeled}}: true
service/mock annotated
replicationcontroller/mock annotated
generic-resources.sh:153: Successful get services mock {{.metadata.annotations.annotated}}: true
generic-resources.sh:159: Successful get rc mock {{.metadata.annotations.annotated}}: true
service "mock" deleted
replicationcontroller "mock" deleted
Testing with file hack/testdata/multi-resource-rclist.json and replace with file hack/testdata/multi-resource-rclist-modify.json
generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}:
generic-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
replicationcontroller/mock created
I0318 12:55:54.321541 23056 event.go:307] "Event occurred" object="namespace-1679144171-2276/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-jnk7s"
replicationcontroller/mock2 created
I0318 12:55:54.343853 23056 event.go:307] "Event occurred" object="namespace-1679144147-2276/mock2" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock2-z8zqx"
generic-resources.sh:78: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock:mock2:
NAME   DESIRED  CURRENT  READY  AGE
mock   1        1        0      0s
mock2  1        1        0      0s

[replication controller descriptions for mock and mock2 follow the same pattern as the YAML case; both carry labels app=mock / app=mock2 plus status=replaced, and report pods mock-jnk7s and mock2-z8zqx]

replicationcontroller "mock" deleted
replicationcontroller "mock2" deleted
replicationcontroller/mock replaced
replicationcontroller/mock2 replaced
I0318 12:55:54.977240 23056 event.go:307] "Event occurred" object="namespace-1679144147-2276/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-kmlfd"
I0318 12:55:54.993695 23056 event.go:307] "Event occurred" object="namespace-1679144147-2276/mock2" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock2-bhtgw"
generic-resources.sh:102: Successful get rc mock {{.metadata.labels.status}}: replaced
generic-resources.sh:104: Successful get rc mock2 {{.metadata.labels.status}}: replaced
replicationcontroller/mock edited
replicationcontroller/mock2 edited
generic-resources.sh:120: Successful get rc mock {{.metadata.labels.status}}: edited
W0318 12:55:55.452728 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0318 12:55:55.452779 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:122: Successful get rc mock2 {{.metadata.labels.status}}: edited
replicationcontroller/mock labeled
replicationcontroller/mock2 labeled
generic-resources.sh:140: Successful get rc mock {{.metadata.labels.labeled}}: true
generic-resources.sh:142: Successful get rc mock2 {{.metadata.labels.labeled}}: true
replicationcontroller/mock annotated
replicationcontroller/mock2 annotated
generic-resources.sh:159: Successful get rc mock {{.metadata.annotations.annotated}}: true
generic-resources.sh:161: Successful get rc mock2 {{.metadata.annotations.annotated}}: true
replicationcontroller "mock" deleted
replicationcontroller "mock2" deleted
Testing with file hack/testdata/multi-resource-svclist.json and replace with file hack/testdata/multi-resource-svclist-modify.json
generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}:
generic-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
I0318 12:55:56.387249 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144147-2276/mock" clusterIPs=map[IPv4:10.0.0.207]
service/mock created
I0318 12:55:56.430307 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144147-2276/mock2" clusterIPs=map[IPv4:10.0.0.149]
service/mock2 created
generic-resources.sh:70: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock:mock2:
NAME   TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
mock   ClusterIP  10.0.0.207  <none>       99/TCP   0s
mock2  ClusterIP  10.0.0.149  <none>       99/TCP   0s

[service descriptions for mock (10.0.0.207) and mock2 (10.0.0.149) follow the same pattern as the YAML case]

service "mock" deleted
service "mock2" deleted
I0318 12:55:56.992612 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144147-2276/mock" clusterIPs=map[IPv4:10.0.0.113]
service/mock replaced
I0318 12:55:57.021735 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144147-2276/mock2" clusterIPs=map[IPv4:10.0.0.73]
service/mock2 replaced
generic-resources.sh:96: Successful get services mock {{.metadata.labels.status}}: replaced
generic-resources.sh:98: Successful get services mock2 {{.metadata.labels.status}}: replaced
I0318 12:55:57.189783 23056 horizontal.go:512] "Horizontal Pod Autoscaler has been deleted" HPA="namespace-1679144134-15522/frontend"
service/mock edited
service/mock2 edited
generic-resources.sh:114: Successful get services mock {{.metadata.labels.status}}: edited
generic-resources.sh:116: Successful get services mock2 {{.metadata.labels.status}}: edited
service/mock labeled
service/mock2 labeled
generic-resources.sh:134: Successful get services mock {{.metadata.labels.labeled}}: true
generic-resources.sh:136: Successful get services mock2 {{.metadata.labels.labeled}}: true
service/mock annotated
service/mock2 annotated
generic-resources.sh:153: Successful get services mock {{.metadata.annotations.annotated}}: true
generic-resources.sh:155: Successful get services mock2 {{.metadata.annotations.annotated}}: true
service "mock" deleted
service "mock2" deleted
generic-resources.sh:173: Successful get services {{range.items}}{{.metadata.name}}:{{end}}:
generic-resources.sh:174: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
I0318 12:55:58.698137 19996 alloc.go:330] "allocated clusterIPs" service="namespace-1679144147-2276/mock" clusterIPs=map[IPv4:10.0.0.77]
service/mock created
replicationcontroller/mock created
I0318 12:55:58.755407 23056 event.go:307] "Event occurred" object="namespace-1679144147-2276/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-7fnbz"
generic-resources.sh:180: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock:
generic-resources.sh:181: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock:
service "mock" deleted
replicationcontroller "mock" deleted
generic-resources.sh:187: Successful get services {{range.items}}{{.metadata.name}}:{{end}}:
generic-resources.sh:188: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
+++ exit code: 0
Recording: run_persistent_volumes_tests
Running command: run_persistent_volumes_tests

+++ Running case: test-cmd.run_persistent_volumes_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_persistent_volumes_tests
+++ [0318 12:55:59] Creating namespace namespace-1679144159-8712
namespace/namespace-1679144159-8712 created
Context "test" modified.
+++ [0318 12:55:59] Testing persistent volumes
storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}:
persistentvolume/pv0001 created
storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
persistentvolume "pv0001" deleted
persistentvolume/pv0002 created
storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
persistentvolume "pv0002" deleted
persistentvolume/pv0003 created
storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
query for persistentvolumes had limit param
query for events had limit param
query for persistentvolumes had user-specified limit param
Successful describe persistentvolumes verbose logs:
I0318 12:56:00.752935 54595 loader.go:373] Config loaded from file:  /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:56:00.757958 54595 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0318 12:56:00.763342 54595 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes?limit=500 200 OK in 1 milliseconds
I0318 12:56:00.766427 54595 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes/pv0003 200 OK in 1 milliseconds
I0318 12:56:00.776123 54595 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dpv0003%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DPersistentVolume%2CinvolvedObject.uid%3D2c5b7408-5f55-4192-957f-35ae54bd2984&limit=500 200 OK in 8 milliseconds
persistentvolume "pv0003" deleted
storage.sh:44: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}:
persistentvolume/pv0001 created
storage.sh:47: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "pv0001" deleted
has:Warning: deleting cluster-scoped resources
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "pv0001" deleted
has:persistentvolume "pv0001" deleted
storage.sh:51: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}:
+++ exit code: 0
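The PV fixtures exercised here are simple cluster-scoped objects; a minimal sketch of an equivalent hostPath volume and the template-based check the harness runs (capacity and path are illustrative, not the actual hack/testdata contents):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /tmp/pv0001
EOF
# The storage.sh assertions boil down to go-template gets like this:
kubectl get pv -o go-template='{{range.items}}{{.metadata.name}}:{{end}}'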
Recording: run_persistent_volume_claims_tests
Running command: run_persistent_volume_claims_tests

+++ Running case: test-cmd.run_persistent_volume_claims_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_persistent_volume_claims_tests
+++ [0318 12:56:01] Creating namespace namespace-1679144161-1127
namespace/namespace-1679144161-1127 created
Context "test" modified.
+++ [0318 12:56:01] Testing persistent volumes claims
storage.sh:66: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}:
persistentvolumeclaim/myclaim-1 created
I0318 12:56:02.117465 23056 event.go:307] "Event occurred" object="namespace-1679144161-1127/myclaim-1" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
I0318 12:56:02.139724 23056 event.go:307] "Event occurred" object="namespace-1679144161-1127/myclaim-1" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
storage.sh:69: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: myclaim-1:
query for persistentvolumeclaims had limit param
query for pods had limit param
query for events had limit param
query for persistentvolumeclaims had user-specified limit param
Successful describe persistentvolumeclaims verbose logs:
I0318 12:56:02.258411 54833 loader.go:373] Config loaded from file:  /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:56:02.263371 54833 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0318 12:56:02.269369 54833 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144161-1127/persistentvolumeclaims?limit=500 200 OK in 1 milliseconds
I0318 12:56:02.271526 54833 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144161-1127/persistentvolumeclaims/myclaim-1 200 OK in 1 milliseconds
I0318 12:56:02.273211 54833 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144161-1127/pods?limit=500 200 OK in 1 milliseconds
I0318 12:56:02.276062 54833 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144161-1127/events?fieldSelector=involvedObject.kind%3DPersistentVolumeClaim%2CinvolvedObject.uid%3Dc2aa4190-b217-4890-aebd-af78a9e8483c%2CinvolvedObject.name%3Dmyclaim-1%2CinvolvedObject.namespace%3Dnamespace-1679144161-1127&limit=500 200 OK in 1 milliseconds
persistentvolumeclaim "myclaim-1" deleted
I0318 12:56:02.414680 23056 event.go:307] "Event occurred" object="namespace-1679144161-1127/myclaim-1" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
persistentvolumeclaim/myclaim-2 created
I0318 12:56:02.741385 23056 event.go:307] "Event occurred" object="namespace-1679144161-1127/myclaim-2" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
I0318 12:56:02.758030 23056 event.go:307] "Event occurred" object="namespace-1679144161-1127/myclaim-2" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
storage.sh:75: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: myclaim-2:
persistentvolumeclaim "myclaim-2" deleted
I0318 12:56:02.880727 23056 event.go:307] "Event occurred" object="namespace-1679144161-1127/myclaim-2" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
I0318 12:56:03.208053 23056 event.go:307] "Event occurred" object="namespace-1679144161-1127/myclaim-3" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
persistentvolumeclaim/myclaim-3 created
I0318 12:56:03.228969 23056 event.go:307] "Event occurred" object="namespace-1679144161-1127/myclaim-3" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
storage.sh:79: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: myclaim-3:
persistentvolumeclaim "myclaim-3" deleted
I0318 12:56:03.361975 23056 event.go:307] "Event occurred" object="namespace-1679144161-1127/myclaim-3" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
storage.sh:82: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}:
+++ exit code: 0
Recording: run_storage_class_tests
Running command: run_storage_class_tests

+++ Running case: test-cmd.run_storage_class_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_storage_class_tests
+++ [0318 12:56:03] Testing storage class
storage.sh:96: Successful get storageclass {{range.items}}{{.metadata.name}}:{{end}}:
storageclass.storage.k8s.io/storage-class-name created
storage.sh:112: Successful get storageclass {{range.items}}{{.metadata.name}}:{{end}}: storage-class-name:
storage.sh:113: Successful get sc {{range.items}}{{.metadata.name}}:{{end}}: storage-class-name:
query for storageclasses had limit param
query for events had limit param
query for storageclasses had user-specified limit param
Successful describe storageclasses verbose logs:
I0318 12:56:04.027019 55081 loader.go:373] Config loaded from file:  /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:56:04.031958 55081 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0318 12:56:04.040904 55081 round_trippers.go:553] GET https://127.0.0.1:6443/apis/storage.k8s.io/v1/storageclasses?limit=500 200 OK in 4 milliseconds
I0318 12:56:04.043094 55081 round_trippers.go:553] GET https://127.0.0.1:6443/apis/storage.k8s.io/v1/storageclasses/storage-class-name 200 OK in 1 milliseconds
I0318 12:56:04.052747 55081 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dstorage-class-name%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DStorageClass%2CinvolvedObject.uid%3D26e701a2-32a8-48a5-9a79-5cb486df292b&limit=500 200 OK in 9 milliseconds
storageclass.storage.k8s.io "storage-class-name" deleted
storage.sh:118: Successful get storageclass {{range.items}}{{.metadata.name}}:{{end}}:
+++ exit code: 0
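A minimal sketch of a StorageClass like the one exercised above; the provisioner value is illustrative, not the actual test fixture:

kubectl create -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-name
provisioner: kubernetes.io/no-provisioner
EOF
kubectl get sc storage-class-name    # 'sc' is the short name storage.sh:113 checks
kubectl delete storageclass storage-class-name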
Recording: run_nodes_tests
Running command: run_nodes_tests

+++ Running case: test-cmd.run_nodes_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_nodes_tests
+++ [0318 12:56:04] Testing kubectl(v1:nodes)
core.sh:1584: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:
matched Name:
matched Labels:
matched CreationTimestamp:
matched Conditions:
matched Addresses:
matched Capacity:
matched Pods:
core.sh:1586: Successful describe nodes 127.0.0.1:
Name:               127.0.0.1
Roles:              <none>
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Sat, 18 Mar 2023 12:50:09 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type            Status   LastHeartbeatTime                LastTransitionTime               Reason                  Message
  ----            ------   -----------------                ------------------               ------                  -------
  Ready           Unknown  Sat, 18 Mar 2023 12:50:09 +0000  Sat, 18 Mar 2023 12:51:09 +0000  NodeStatusNeverUpdated  Kubelet never posted node status.
  MemoryPressure  Unknown  Sat, 18 Mar 2023 12:50:09 +0000  Sat, 18 Mar 2023 12:51:09 +0000  NodeStatusNeverUpdated  Kubelet never posted node status.
  DiskPressure    Unknown  Sat, 18 Mar 2023 12:50:09 +0000  Sat, 18 Mar 2023 12:51:09 +0000  NodeStatusNeverUpdated  Kubelet never posted node status.
  PIDPressure     Unknown  Sat, 18 Mar 2023 12:50:09 +0000  Sat, 18 Mar 2023 12:51:09 +0000  NodeStatusNeverUpdated  Kubelet never posted node status.
Addresses:
Capacity:
  memory:  1Gi
Allocatable:
  memory:  1Gi
System Info:
  Machine ID:
  System UUID:
  Boot ID:
  Kernel Version:
  OS Image:
  Operating System:
  Architecture:
  Container Runtime Version:
  Kubelet Version:
  Kube-Proxy Version:
Non-terminated Pods:  (0 in total)
  Namespace  Name  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------  ----  ------------  ----------  ---------------  -------------  ---
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                0 (0%)    0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
Events:
  Type    Reason          Age    From             Message
  ----    ------          ----   ----             -------
  Normal  RegisteredNode  5m50s  node-controller  Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
core.sh:1588: Successful describe
[node description identical to the block above]
core.sh:1590: Successful describe
[same description, printed without the Events section]
core.sh:1592: Successful describe
[node description identical to the first block]
matched Name:
matched Labels:
matched CreationTimestamp:
matched Conditions:
matched Addresses:
matched Capacity:
matched Pods:
Successful describe nodes:
[node description identical to the first block]
Successful describe
[node description identical to the first block]
Successful describe
[same description, printed without the Events section]
Successful describe
[same description; the RegisteredNode event age reads 5m51s]
query for nodes had limit param
query for pods had limit param
query for events had limit param
query for nodes had user-specified limit param
Successful describe nodes verbose logs:
I0318 12:56:05.150725 55331 loader.go:373] Config loaded from file:  /tmp/tmp.JFDEKO8UeQ/.kube/config
I0318 12:56:05.155745 55331 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0318 12:56:05.161210 55331 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?limit=500 200 OK in 1 milliseconds
I0318 12:56:05.164467 55331 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes/127.0.0.1 200 OK in 1 milliseconds
I0318 12:56:05.166322 55331 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500 200 OK in 1 milliseconds
I0318 12:56:05.177168 55331 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3D127.0.0.1%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DNode%2CinvolvedObject.uid%3D21c18140-f05c-49c7-aa1d-d0d6a7267a65&limit=500 200 OK in 9 milliseconds
I0318 12:56:05.186475 55331 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.uid%3D127.0.0.1%2CinvolvedObject.name%3D127.0.0.1%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DNode&limit=500 200 OK in 9 milliseconds
I0318 12:56:05.187771 55331 round_trippers.go:553] GET https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/127.0.0.1 404 Not Found in 1 milliseconds
core.sh:1606: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}:
node/127.0.0.1 patched
core.sh:1609: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: true
node/127.0.0.1 patched
core.sh:1612: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}:
tokenreview.authentication.k8s.io/ created
+++ exit code: 0
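The unschedulable round-trip at core.sh:1606-1612 is plain strategic-merge patching; a sketch of equivalent commands (setting the field to null clears it again):

kubectl patch node 127.0.0.1 -p '{"spec":{"unschedulable":true}}'   # core.sh:1609 then sees true
kubectl patch node 127.0.0.1 -p '{"spec":{"unschedulable":null}}'   # field empty again at core.sh:1612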
Recording: run_exec_credentials_tests
Running command: run_exec_credentials_tests

+++ Running case: test-cmd.run_exec_credentials_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_exec_credentials_tests
+++ [0318 12:56:05] Testing kubectl with configured client.authentication.k8s.io/v1beta1 exec credentials plugin
+++ [0318 12:56:05] exec credential plugin not triggered since kubectl was called with provided --token
+++ [0318 12:56:05] exec credential plugin triggered since kubectl was called without provided --token
+++ [0318 12:56:05] exec credential plugin triggered and provided valid credentials
+++ [0318 12:56:05] exec credential plugin not triggered since kubectl was called with provided --username/--password
certificatesigningrequest.certificates.k8s.io/testuser created
authentication.sh:152: Successful get csr/testuser {{range.status.conditions}}{{.type}}{{end}}:
certificatesigningrequest.certificates.k8s.io/testuser approved
authentication.sh:154: Successful get csr/testuser {{range.status.conditions}}{{.type}}{{end}}: Approved
authentication.sh:156: Successful get csr/testuser {{.status.certificate}}: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMxakNDQWI2Z0F3SUJBZ0lRUVF0VnpCbjdtQ296UkdVbTZTQWhLREFOQmdrcWhraUc5dzBCQVFzRkFEQVUKTVJJd0VBWURWUVFEREFreE1qY3VNQzR3TGpFd0hoY05Nak13TXpFNE1USTFNVEEyV2hjTk1qUXdNekUzTVRJMQpNVEEyV2pBVE1SRXdEd1lEVlFRREV3aDBaWE4wZFhObGNqQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQCkFEQ0NBUW9DZ2dFQkFMUXhaWE1YZEFMM1oxQkxNRCswR1ZyZWlSMDJSSTNKMzVKdWZNVTNFZ2FhZWIvbTlLRFYKUFp2ekxIcXpRaGN2TlFyYVZwajF3SFhhZEwzcWJkaFg3dlZnVldkZUVhbUJ6RlNWQ0J5di9LU1p3eVhuamtkZwpGR3psM2FiKy90eDlpT3EyYTY4WEFSNUI1WElqY09sWXNTN2NNVUpxelZPa0FHY09DQTV3c043VmdTTUdDam1FCjJXaU9jcTQ2b3JNbDhoYU9OdUgvQjYrL0ZFeG43bDkyTGlLVnBLSGNuWGVTYSthK3VFUGFlZnBhcFIyT0ZLd0wKeWc1T2FmTnViVkUxdW1BN3JEdElKdE9uOFRGdmRnaStyVlBxYk9LajgzUTNoVkNCTGE1RDdyb1JHSlVIV0lZZgo0bm1OS21GeWJ5eGdDM0JkREozNERkM3JYQ2pUd0ZwN01GVUNBd0VBQWFNbE1DTXdFd1lEVlIwbEJBd3dDZ1lJCkt3WUJCUVVIQXdJd0RBWURWUjBUQVFIL0JBSXdBREFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBVDNMYWF1S1cKQU5yL09ETDlQQ0dlL3FTTnBCaTFKSGUrbDZHZWVjcVVYdjNyNjdwQlVpREgzWDNTUTVIMHBQaURDNmNkUVI4aworSkRhM0E0M2dDSWc3Vld3K3U2cUcvNHNKT1JDa3VsRlU5TDFJWGpkQWl6L2JBN1N1ekowLzgxSVhUaGhzNFdCCmhic2VkYkYwdldSTmNqanJrb0N5QUo3WVJ2bm56RjVYZE01ZVRTUEJuUWJQMlkyZWJscHNVemExdDV4ZUxVeEsKY2cwYkM3YUlCREZYL0Z5bEdYVm11azRpSkdlZlVkdlJESHZra1NGaEp0aVRFbDJuUDkwL1RMTGVPZEN6eHQwSQpMU1NSVWNoaXhIOVFLNEdYTno1emdtV25MaTZDK29DWjFnUXF2ZjRhcUJ5cXgzcXZYWitRSUxTNHNQU2sxRnJaCjdyKzJ4ZGwxWW53RE1RPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
+++ [0318 12:56:06] exec credential plugin not triggered since kubectl was called with provided --client-certificate/--client-key
User "testuser" set.
+++ [0318 12:56:06] exec credential plugin not triggered since kubeconfig was configured with --client-certificate/--client-key for authentication
certificatesigningrequest.certificates.k8s.io "testuser" deleted
+++ [0318 12:56:06] Testing kubectl with configured client.authentication.k8s.io/v1 exec credentials plugin
+++ [0318 12:56:06] exec credential plugin not triggered since kubectl was called with provided --token
+++ [0318 12:56:06] exec credential plugin triggered since kubectl was called without provided --token
+++ [0318 12:56:06] exec credential plugin triggered and provided valid credentials
+++ [0318 12:56:06] exec credential plugin not triggered since kubectl was called with provided --username/--password
certificatesigningrequest.certificates.k8s.io/testuser created
authentication.sh:152: Successful get csr/testuser {{range.status.conditions}}{{.type}}{{end}}:
certificatesigningrequest.certificates.k8s.io/testuser approved
authentication.sh:154: Successful get csr/testuser {{range.status.conditions}}{{.type}}{{end}}: Approved
authentication.sh:156: Successful get csr/testuser {{.status.certificate}}: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMxekNDQWIrZ0F3SUJBZ0lSQUkyMkV3cmdMVW10TkdJZGNlM2ZMMVl3RFFZSktvWklodmNOQVFFTEJRQXcKRkRFU01CQUdBMVVFQXd3Sk1USTNMakF1TUM0eE1CNFhEVEl6TURNeE9ERXlOVEV3TjFvWERUSTBNRE14TnpFeQpOVEV3TjFvd0V6RVJNQThHQTFVRUF4TUlkR1Z6ZEhWelpYSXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCCkR3QXdnZ0VLQW9JQkFRQzBNV1Z6RjNRQzkyZFFTekEvdEJsYTNva2ROa1NOeWQrU2JuekZOeElHbW5tLzV2U2cKMVQyYjh5eDZzMElYTHpVSzJsYVk5Y0IxMm5TOTZtM1lWKzcxWUZWblhoR3BnY3hVbFFnY3IveWttY01sNTQ1SApZQlJzNWQybS92N2NmWWpxdG11dkZ3RWVRZVZ5STNEcFdMRXUzREZDYXMxVHBBQm5EZ2dPY0xEZTFZRWpCZ281CmhObG9qbkt1T3FLekpmSVdqamJoL3dldnZ4Uk1aKzVmZGk0aWxhU2gzSjEza212bXZyaEQybm42V3FVZGpoU3MKQzhvT1RtbnpibTFSTmJwZ082dzdTQ2JUcC9FeGIzWUl2cTFUNm16aW8vTjBONFZRZ1MydVErNjZFUmlWQjFpRwpIK0o1alNwaGNtOHNZQXR3WFF5ZCtBM2Q2MXdvMDhCYWV6QlZBZ01CQUFHakpUQWpNQk1HQTFVZEpRUU1NQW9HCkNDc0dBUVVGQndNQ01Bd0dBMVVkRXdFQi93UUNNQUF3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUdRMWVQRHIKdlBCa3hOR0ZvRzF6NzdXejQxTDR5ZW5EcUh4MG9TY2o1cUowaHpyemo4anIyeWdWSm5IQ0RkNk5KSkgvY1dhbQptci9WYTZubGpKTDlqSTFXNmR2OUZjNmdiR0xGSGEwMndqd0orMTFkZ1JvK1FwWllPYTFtZkFvUk5LcnlNUms1CnBqb0d6d0dHdVQ2YjhhTFFVNW91R2x2eHFnclN0eGdDRjd0ZUtBdkp4UU8wZHhWanBsU3puM2M1UU9SMkhyOUIKZERhaWcvcEJlUmx0K2hmc240aTN5K2ZIR1JUdlErV2NHWkViY0ZyZUk0U3dDd0FjLy9RN25iMkpXbTNxQzA0MApaelRkL1hPK1pzdGVsc25OcmdBd3lONlhkVDFnUEZCQmIxYXB5V3dwakI4bUFUY1Y1V3pKY0gzY2xlZGlPSkErCnFGVlBwQVlVeFdYOWZqTT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
+++ [0318 12:56:07] exec credential plugin not triggered since kubectl was called with provided --client-certificate/--client-key
User "testuser" set.
+++ [0318 12:56:07] exec credential plugin not triggered since kubeconfig was configured with --client-certificate/--client-key for authentication
certificatesigningrequest.certificates.k8s.io "testuser" deleted
+++ exit code: 0
Recording: run_exec_credentials_interactive_tests
Running command: run_exec_credentials_interactive_tests

+++ Running case: test-cmd.run_exec_credentials_interactive_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_exec_credentials_interactive_tests
+++ [0318 12:56:07] Testing kubectl with configured client.authentication.k8s.io/v1beta1 interactive exec credentials plugin
+++ [0318 12:56:07] Running command 'script -q /dev/null -c /tmp/test-cmd-exec-credentials-script-file.sh' (kubectl command: 'replace -f - --force') with input '{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"some-resource"}}'
+++ [0318 12:56:08] exec credential plugin not run because kubectl already uses standard input
+++ [0318 12:56:08] Running command 'script -q /dev/null -c /tmp/test-cmd-exec-credentials-script-file.sh' (kubectl command: 'apply -f -') with input '{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"some-resource"}}'
+++ [0318 12:56:08] exec credential plugin not run because kubectl already uses standard input
+++ [0318 12:56:08] Running command 'script -q /dev/null -c /tmp/test-cmd-exec-credentials-script-file.sh' (kubectl command: 'set env deployment/some-deployment -') with input 'SOME_ENV_VAR_KEY=SOME_ENV_VAR_VAL'
W0318 12:56:08.603694 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0318 12:56:08.603732 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ [0318 12:56:08] exec credential plugin not run because kubectl already uses standard input
+++ [0318 12:56:08] client.authentication.k8s.io/v1beta1 exec credential plugin triggered and provided valid credentials
+++ [0318 12:56:08] Testing kubectl with configured client.authentication.k8s.io/v1 interactive exec credentials plugin
+++ [0318 12:56:08] Running command 'script -q /dev/null -c /tmp/test-cmd-exec-credentials-script-file.sh' (kubectl command: 'replace -f - --force') with input '{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"some-resource"}}'
+++ [0318 12:56:09] exec credential plugin not run because kubectl already uses standard input
+++ [0318 12:56:09] Running command 'script -q /dev/null -c /tmp/test-cmd-exec-credentials-script-file.sh' (kubectl command: 'apply -f -') with input '{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"some-resource"}}'
+++ [0318 12:56:09] exec credential plugin not run because kubectl already uses standard input
+++ [0318 12:56:09] Running command 'script -q /dev/null -c /tmp/test-cmd-exec-credentials-script-file.sh' (kubectl command: 'set env deployment/some-deployment -') with input 'SOME_ENV_VAR_KEY=SOME_ENV_VAR_VAL'
+++ [0318 12:56:09] exec credential plugin not run because kubectl already uses standard input
+++ [0318 12:56:09] kubeconfig was not loaded successfully because client.authentication.k8s.io/v1 exec credential plugin is missing interactiveMode
+++ exit code: 0
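These cases toggle an exec credential plugin in the test kubeconfig. A sketch of the user stanza involved (name and command path illustrative); the final failure above is exercising the rule that the v1 API requires interactiveMode:

# kubeconfig fragment (sketch); with apiVersion client.authentication.k8s.io/v1,
# omitting interactiveMode makes kubeconfig loading fail, as logged above.
users:
- name: exec-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1
      command: /tmp/test-cmd-exec-credentials-script-file.sh
      interactiveMode: Never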
Recording: run_authorization_tests
Running command: run_authorization_tests

+++ Running case: test-cmd.run_authorization_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_authorization_tests
+++ [0318 12:56:09] Testing authorization
subjectaccessreview.authorization.k8s.io/ created
+++ [0318 12:56:09] "authorization.k8s.io/subjectaccessreviews" returns as expected: {
  "kind": "SubjectAccessReview",
  "apiVersion": "authorization.k8s.io/v1",
  "metadata": {
    "creationTimestamp": null,
    "managedFields": [
      {
        "manager": "curl",
        "operation": "Update",
        "apiVersion": "authorization.k8s.io/v1",
        "time": "2023-03-18T12:56:09Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:spec": {
            "f:groups": {},
            "f:resourceAttributes": {
              ".": {},
              "f:group": {},
              "f:namespace": {},
              "f:resource": {},
              "f:verb": {}
            },
            "f:user": {}
          }
        }
      }
    ]
  },
  "spec": {
    "resourceAttributes": {
      "namespace": "ns",
      "verb": "create",
      "group": "autoscaling",
      "resource": "horizontalpodautoscalers"
    },
    "user": "bob",
    "groups": [
      "the-group"
    ]
  },
  "status": {
    "allowed": true,
    "reason": "RBAC: allowed by ClusterRoleBinding \"super-group\" of ClusterRole \"admin\" to Group \"the-group\""
  }
}
+++ exit code: 0
Successful
message:yes
has:yes
Successful
message:yes
has:yes
Successful
message:Warning: the server doesn't have a resource type 'invalid_resource'
yes
has:the server doesn't have a resource type
Successful
message:yes
has:yes
Successful
message:error: --subresource can not be used with NonResourceURL
has:subresource can not be used with NonResourceURL
Successful
Successful
message:yes
0
has:0
Successful
message:0
has:0
Successful
message:yes
has not:Warning
Successful
message:Warning: the server doesn't have a resource type 'foo'
yes
has:Warning: the server doesn't have a resource type 'foo'
Successful
message:Warning: the server doesn't have a resource type 'foo'
yes
has not:Warning: resource 'foo' is not namespace scoped
Successful
message:yes
has not:Warning
Successful
message:Warning: resource 'nodes' is not namespace scoped
yes
has:Warning: resource 'nodes' is not namespace scoped
Successful
message:yes
has not:Warning: resource 'nodes' is not namespace scoped
clusterrole.rbac.authorization.k8s.io/testing-CR reconciled (dry run)
	reconciliation required create
	missing rules added: {Verbs:[create delete deletecollection get list patch update watch] APIGroups:[] Resources:[pods] ResourceNames:[] NonResourceURLs:[]}
clusterrolebinding.rbac.authorization.k8s.io/testing-CRB reconciled (dry run)
	reconciliation required create
	missing subjects added: {Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:}
rolebinding.rbac.authorization.k8s.io/testing-RB reconciled (dry run)
	reconciliation required create
	missing subjects added: {Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:}
role.rbac.authorization.k8s.io/testing-R reconciled (dry run)
	reconciliation required create
	missing rules added: {Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
legacy-script.sh:880: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}:
legacy-script.sh:881: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}:
legacy-script.sh:882: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}:
legacy-script.sh:883: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}:
clusterrole.rbac.authorization.k8s.io/testing-CR reconciled
	reconciliation required create
	missing rules added: {Verbs:[create delete deletecollection get list patch update watch] APIGroups:[] Resources:[pods] ResourceNames:[] NonResourceURLs:[]}
clusterrolebinding.rbac.authorization.k8s.io/testing-CRB reconciled
	reconciliation required create
	missing subjects added: {Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:}
rolebinding.rbac.authorization.k8s.io/testing-RB reconciled
	reconciliation required create
	missing subjects added: {Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:}
role.rbac.authorization.k8s.io/testing-R reconciled
	reconciliation required create
	missing rules added: {Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
legacy-script.sh:887: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
legacy-script.sh:888: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
legacy-script.sh:889: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
legacy-script.sh:890: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
Successful
message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
has:only rbac.authorization.k8s.io/v1 is supported
rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
role.rbac.authorization.k8s.io "testing-R" deleted
Warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
Recording: run_retrieve_multiple_tests
Running command: run_retrieve_multiple_tests

+++ Running case: test-cmd.run_retrieve_multiple_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_retrieve_multiple_tests
Context "test" modified.
+++ [0318 12:56:11] Testing kubectl(v1:multiget)
get.sh:250: Successful get nodes/127.0.0.1 service/kubernetes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:kubernetes:
+++ exit code: 0
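The yes/no probes and the reconcile passes above are kubectl auth subcommands; a sketch of representative invocations (names and file path illustrative):

kubectl auth can-i create horizontalpodautoscalers --namespace ns   # prints yes or no
kubectl auth can-i get /logs/                                       # NonResourceURL form
kubectl auth can-i get nodes                                        # warns: resource 'nodes' is not namespace scoped
kubectl auth reconcile -f rbac-fixtures.yaml                        # the dry-run pass above feeds the same input first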
+++ [0318 12:56:12] Testing resource aliasing replicationcontroller/cassandra created I0318 12:56:12.246810 23056 event.go:307] "Event occurred" object="namespace-1679144171-4757/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-c4z6v" I0318 12:56:12.264406 23056 event.go:307] "Event occurred" object="namespace-1679144171-4757/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-6pjsn" service/cassandra created discovery.sh:236: Successful get all -l app=cassandra {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}: cassandra:cassandra:cassandra:cassandra: (BI0318 12:56:12.676291 23056 event.go:307] "Event occurred" object="namespace-1679144171-4757/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-9klfp" pod "cassandra-6pjsn" deleted I0318 12:56:12.717694 23056 event.go:307] "Event occurred" object="namespace-1679144171-4757/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-f6khv" pod "cassandra-c4z6v" deleted replicationcontroller "cassandra" deleted service "cassandra" deleted +++ exit code: 0 Recording: run_kubectl_explain_tests Running command: run_kubectl_explain_tests +++ Running case: test-cmd.run_kubectl_explain_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_explain_tests +++ [0318 12:56:12] Testing kubectl(v1:explain) KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. FIELDS: apiVersion APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status Most recently observed status of the pod. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. FIELDS: apiVersion APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind Kind is a string value representing the REST resource this object represents. 
Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status Most recently observed status of the pod. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status KIND: Pod VERSION: v1 FIELD: message DESCRIPTION: A human readable message indicating details about why the pod is in this condition. GROUP: batch KIND: CronJob VERSION: v1 DESCRIPTION: CronJob represents the configuration of a single cron job. FIELDS: apiVersion APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec Specification of the desired behavior of a cron job, including the schedule. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status Current status of a cron job. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status +++ exit code: 0 Recording: run_crd_deletion_recreation_tests Running command: run_crd_deletion_recreation_tests +++ Running case: test-cmd.run_crd_deletion_recreation_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_crd_deletion_recreation_tests +++ [0318 12:56:13] Creating namespace namespace-1679144173-30650 namespace/namespace-1679144173-30650 created Context "test" modified. 
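Note: the schema dumps above come from kubectl explain, which renders the OpenAPI schema served by the apiserver. A hedged sketch of invocations that produce output of this shape; the exact commands run by the test script are not visible in the log, and the field path for the message block is inferred from its description:

  kubectl explain pods                             # the top-level Pod schema, printed twice above
  kubectl explain pods.status.message              # a single-field view: KIND: Pod, FIELD: message
  kubectl explain cronjobs --api-version=batch/v1  # the GROUP: batch / KIND: CronJob schema

Field paths can be chained to any depth (for example pods.spec.containers.resources), and each level prints the same KIND/VERSION/DESCRIPTION/FIELDS layout seen above.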
+++ [0318 12:56:13] Testing resource creation, deletion, and re-creation Successful (Bmessage:customresourcedefinition.apiextensions.k8s.io/examples.test.com created has:created I0318 12:56:13.990183 19996 handler.go:165] Adding GroupVersion test.com v1 to ResourceManager Successful (Bmessage:example.test.com/test created has:created I0318 12:56:16.253707 19996 handler.go:165] Adding GroupVersion test.com v1 to ResourceManager I0318 12:56:16.264702 19996 handler.go:165] Adding GroupVersion test.com v1 to ResourceManager Successful (Bmessage:customresourcedefinition.apiextensions.k8s.io "examples.test.com" deleted has:deleted NAME SHORTNAMES APIVERSION NAMESPACED KIND bindings v1 true Binding componentstatuses cs v1 false ComponentStatus configmaps cm v1 true ConfigMap endpoints ep v1 true Endpoints events ev v1 true Event limitranges limits v1 true LimitRange namespaces ns v1 false Namespace nodes no v1 false Node persistentvolumeclaims pvc v1 true PersistentVolumeClaim persistentvolumes pv v1 false PersistentVolume pods po v1 true Pod podtemplates v1 true PodTemplate replicationcontrollers rc v1 true ReplicationController resourcequotas quota v1 true ResourceQuota secrets v1 true Secret serviceaccounts sa v1 true ServiceAccount services svc v1 true Service mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration customresourcedefinitions crd,crds apiextensions.k8s.io/v1 false CustomResourceDefinition apiservices apiregistration.k8s.io/v1 false APIService controllerrevisions apps/v1 true ControllerRevision daemonsets ds apps/v1 true DaemonSet deployments deploy apps/v1 true Deployment replicasets rs apps/v1 true ReplicaSet statefulsets sts apps/v1 true StatefulSet tokenreviews authentication.k8s.io/v1 false TokenReview localsubjectaccessreviews authorization.k8s.io/v1 true LocalSubjectAccessReview selfsubjectaccessreviews authorization.k8s.io/v1 false SelfSubjectAccessReview selfsubjectrulesreviews authorization.k8s.io/v1 false SelfSubjectRulesReview subjectaccessreviews authorization.k8s.io/v1 false SubjectAccessReview horizontalpodautoscalers hpa autoscaling/v2 true HorizontalPodAutoscaler cronjobs cj batch/v1 true CronJob jobs batch/v1 true Job certificatesigningrequests csr certificates.k8s.io/v1 false CertificateSigningRequest leases coordination.k8s.io/v1 true Lease endpointslices discovery.k8s.io/v1 true EndpointSlice events ev events.k8s.io/v1 true Event flowschemas flowcontrol.apiserver.k8s.io/v1beta3 false FlowSchema prioritylevelconfigurations flowcontrol.apiserver.k8s.io/v1beta3 false PriorityLevelConfiguration ingressclasses networking.k8s.io/v1 false IngressClass ingresses ing networking.k8s.io/v1 true Ingress networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy runtimeclasses node.k8s.io/v1 false RuntimeClass poddisruptionbudgets pdb policy/v1 true PodDisruptionBudget clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding clusterroles rbac.authorization.k8s.io/v1 false ClusterRole rolebindings rbac.authorization.k8s.io/v1 true RoleBinding roles rbac.authorization.k8s.io/v1 true Role priorityclasses pc scheduling.k8s.io/v1 false PriorityClass csidrivers storage.k8s.io/v1 false CSIDriver csinodes storage.k8s.io/v1 false CSINode csistoragecapacities storage.k8s.io/v1 true CSIStorageCapacity storageclasses sc storage.k8s.io/v1 false StorageClass volumeattachments storage.k8s.io/v1 false VolumeAttachment 
Successful (Bmessage:customresourcedefinition.apiextensions.k8s.io/examples.test.com created has:created I0318 12:56:17.026619 19996 handler.go:165] Adding GroupVersion test.com v1 to ResourceManager Successful (Bmessage:example.test.com/test created has:created example.test.com "test" deleted I0318 12:56:19.349934 19996 handler.go:165] Adding GroupVersion test.com v1 to ResourceManager customresourcedefinition.apiextensions.k8s.io "examples.test.com" deleted I0318 12:56:19.361699 19996 handler.go:165] Adding GroupVersion test.com v1 to ResourceManager +++ exit code: 0 Recording: run_swagger_tests Running command: run_swagger_tests +++ Running case: test-cmd.run_swagger_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_swagger_tests +++ [0318 12:56:19] Testing swagger +++ exit code: 0 Recording: run_kubectl_sort_by_tests Running command: run_kubectl_sort_by_tests +++ Running case: test-cmd.run_kubectl_sort_by_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_sort_by_tests +++ [0318 12:56:19] Testing kubectl --sort-by get.sh:306: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (BNo resources found in namespace-1679144173-30650 namespace. No resources found in namespace-1679144173-30650 namespace. get.sh:314: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/valid-pod created get.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (BSuccessful (Bmessage:NAME READY STATUS RESTARTS AGE valid-pod 0/1 Pending 0 0s has:valid-pod Successful (Bmessage:I0318 12:56:20.656302 57255 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:56:20.660715 57255 round_trippers.go:463] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144173-30650/pods?includeObject=Object I0318 12:56:20.660734 57255 round_trippers.go:469] Request Headers: I0318 12:56:20.660743 57255 round_trippers.go:473] User-Agent: kubectl/v1.27.0 (linux/amd64) kubernetes/7a1ef20 I0318 12:56:20.660750 57255 round_trippers.go:473] Authorization: Bearer I0318 12:56:20.660756 57255 round_trippers.go:473] Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json I0318 12:56:20.666245 57255 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds I0318 12:56:20.666270 57255 round_trippers.go:577] Response Headers: I0318 12:56:20.666288 57255 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 1f32ad4f-c846-46fa-845c-e96074ea1f78 I0318 12:56:20.666299 57255 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 565f05eb-6409-4008-b261-77348362a846 I0318 12:56:20.666310 57255 round_trippers.go:580] Date: Sat, 18 Mar 2023 12:56:20 GMT I0318 12:56:20.666320 57255 round_trippers.go:580] Audit-Id: cb272814-dd9f-47d6-9d02-9491471fb1e1 I0318 12:56:20.666330 57255 round_trippers.go:580] Cache-Control: no-cache, private I0318 12:56:20.666340 57255 round_trippers.go:580] Content-Type: application/json I0318 12:56:20.666426 57255 request.go:1188] Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{"resourceVersion":"3766"},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names","priority":0},{"name":"Ready","type":"string","format":"","description":"The aggregate readiness state of this pod for accepting traffic.","priority":0},{"name":"Status","type":"string","format":"","description":"The aggregate status of the containers in this pod.","priority":0},{"name":"Restarts","type":"string","format":"","description":"The number of times the containers in this pod have been restarted and when the last container in this pod has restarted.","priority":0},{"n [truncated 3547 chars] NAME READY STATUS RESTARTS AGE valid-pod 0/1 Pending 0 0s has:as=Table Successful (Bmessage:I0318 12:56:20.656302 57255 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:56:20.660715 57255 round_trippers.go:463] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144173-30650/pods?includeObject=Object I0318 12:56:20.660734 57255 round_trippers.go:469] Request Headers: I0318 12:56:20.660743 57255 round_trippers.go:473] User-Agent: kubectl/v1.27.0 (linux/amd64) kubernetes/7a1ef20 I0318 12:56:20.660750 57255 round_trippers.go:473] Authorization: Bearer I0318 12:56:20.660756 57255 round_trippers.go:473] Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json I0318 12:56:20.666245 57255 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds I0318 12:56:20.666270 57255 round_trippers.go:577] Response Headers: I0318 12:56:20.666288 57255 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 1f32ad4f-c846-46fa-845c-e96074ea1f78 I0318 12:56:20.666299 57255 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 565f05eb-6409-4008-b261-77348362a846 I0318 12:56:20.666310 57255 round_trippers.go:580] Date: Sat, 18 Mar 2023 12:56:20 GMT I0318 12:56:20.666320 57255 round_trippers.go:580] Audit-Id: cb272814-dd9f-47d6-9d02-9491471fb1e1 I0318 12:56:20.666330 57255 round_trippers.go:580] Cache-Control: no-cache, private I0318 12:56:20.666340 57255 round_trippers.go:580] Content-Type: application/json I0318 12:56:20.666426 57255 request.go:1188] Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{"resourceVersion":"3766"},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names","priority":0},{"name":"Ready","type":"string","format":"","description":"The aggregate readiness state of this pod for accepting traffic.","priority":0},{"name":"Status","type":"string","format":"","description":"The aggregate status of the containers in this pod.","priority":0},{"name":"Restarts","type":"string","format":"","description":"The number of times the containers in this pod have been restarted and when the last container in this pod has restarted.","priority":0},{"n [truncated 3547 chars] NAME READY STATUS RESTARTS AGE valid-pod 0/1 Pending 0 0s has:includeObject=Object get.sh:329: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (BWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely. pod "valid-pod" force deleted get.sh:333: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bget.sh:338: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/sorted-pod1 created get.sh:342: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: sorted-pod1: (Bpod/sorted-pod2 created get.sh:346: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: sorted-pod1:sorted-pod2: (Bpod/sorted-pod3 created get.sh:350: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: sorted-pod1:sorted-pod2:sorted-pod3: (BSuccessful (Bmessage:sorted-pod1:sorted-pod2:sorted-pod3: has:sorted-pod1:sorted-pod2:sorted-pod3: Successful (Bmessage:sorted-pod3:sorted-pod2:sorted-pod1: has:sorted-pod3:sorted-pod2:sorted-pod1: Successful (Bmessage:sorted-pod2:sorted-pod1:sorted-pod3: has:sorted-pod2:sorted-pod1:sorted-pod3: Successful (Bmessage:sorted-pod1:sorted-pod2:sorted-pod3: has:sorted-pod1:sorted-pod2:sorted-pod3: Successful (Bmessage:sorted-pod3:sorted-pod1:sorted-pod2: has:sorted-pod3:sorted-pod1:sorted-pod2: Successful (Bmessage:sorted-pod3:sorted-pod1:sorted-pod2: has:sorted-pod3:sorted-pod1:sorted-pod2: Successful (Bmessage:sorted-pod3:sorted-pod1:sorted-pod2: has:sorted-pod3:sorted-pod1:sorted-pod2: Successful (Bmessage:sorted-pod3:sorted-pod1:sorted-pod2: has:sorted-pod3:sorted-pod1:sorted-pod2: Successful (Bmessage:I0318:I0318:I0318:I0318:I0318:I0318:I0318:I0318:I0318:I0318:I0318:I0318:I0318:I0318:NAME:sorted-pod2:sorted-pod1:sorted-pod3: has:sorted-pod2:sorted-pod1:sorted-pod3: Successful (Bmessage:I0318 12:56:22.807372 57534 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:56:22.811199 57534 round_trippers.go:463] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1679144173-30650/pods I0318 12:56:22.811400 57534 round_trippers.go:469] Request Headers: I0318 12:56:22.811410 57534 round_trippers.go:473] User-Agent: kubectl/v1.27.0 (linux/amd64) kubernetes/7a1ef20 I0318 12:56:22.811419 57534 round_trippers.go:473] Authorization: Bearer I0318 12:56:22.811433 57534 round_trippers.go:473] Accept: application/json I0318 12:56:22.817171 57534 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds I0318 12:56:22.817191 57534 round_trippers.go:577] Response Headers: I0318 12:56:22.817199 57534 round_trippers.go:580] Audit-Id: f42f9248-d9ca-45d7-8ec3-2b6fa77439df I0318 12:56:22.817210 57534 round_trippers.go:580] Cache-Control: no-cache, private I0318 12:56:22.817220 57534 round_trippers.go:580] Content-Type: application/json I0318 12:56:22.817243 57534 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 1f32ad4f-c846-46fa-845c-e96074ea1f78 I0318 12:56:22.817250 57534 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 565f05eb-6409-4008-b261-77348362a846 I0318 12:56:22.817257 57534 round_trippers.go:580] Date: Sat, 18 Mar 2023 12:56:22 GMT I0318 12:56:22.817360 57534 request.go:1188] Response Body: 
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"3772"},"items":[{"metadata":{"name":"sorted-pod1","namespace":"namespace-1679144173-30650","uid":"9bf115f8-afe7-4624-bd4e-4a8fa9c0d854","resourceVersion":"3769","creationTimestamp":"2023-03-18T12:56:21Z","labels":{"name":"sorted-pod3-label"},"managedFields":[{"manager":"kubectl-create","operation":"Update","apiVersion":"v1","time":"2023-03-18T12:56:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-pause2\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"containers":[{"name":"kubernetes-pause2","image":"registry.k8 [truncated 3224 chars] NAME AGE sorted-pod2 1s sorted-pod1 1s sorted-pod3 0s has not:Table get.sh:391: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: sorted-pod1:sorted-pod2:sorted-pod3: (BWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "sorted-pod1" force deleted pod "sorted-pod2" force deleted pod "sorted-pod3" force deleted get.sh:395: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (B+++ exit code: 0 Recording: run_kubectl_all_namespace_tests Running command: run_kubectl_all_namespace_tests +++ Running case: test-cmd.run_kubectl_all_namespace_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_all_namespace_tests +++ [0318 12:56:23] Testing kubectl --all-namespace get.sh:408: Successful get namespaces {{range.items}}{{if eq .metadata.name "default"}}{{.metadata.name}}:{{end}}{{end}}: default: (Bget.sh:412: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/valid-pod created get.sh:416: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (BNAMESPACE NAME READY STATUS RESTARTS AGE namespace-1679144173-30650 valid-pod 0/1 Pending 0 0s W0318 12:56:23.678395 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:56:23.678436 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource namespace/all-ns-test-1 created serviceaccount/test created namespace/all-ns-test-2 created serviceaccount/test created Successful (Bmessage:NAMESPACE NAME SECRETS AGE all-ns-test-1 default 0 0s all-ns-test-1 test 0 0s all-ns-test-2 default 0 0s all-ns-test-2 test 0 0s default default 0 6m14s kube-node-lease default 0 116s kube-public default 0 6m14s kube-system default 0 6m14s namespace-1679144064-29468 default 0 119s namespace-1679144072-4326 default 0 111s namespace-1679144079-7668 default 0 104s namespace-1679144080-10414 default 0 103s namespace-1679144086-22328 default 0 97s namespace-1679144093-32331 default 0 90s namespace-1679144094-25199 default 0 89s namespace-1679144101-12024 default 0 81s namespace-1679144103-10756 default 0 80s namespace-1679144106-6445 default 0 77s 
namespace-1679144118-22967 default 0 65s namespace-1679144134-15522 default 0 49s namespace-1679144142-11099 default 0 41s namespace-1679144144-13056 default 0 39s namespace-1679144146-22494 default 0 37s namespace-1679144147-2276 default 0 36s namespace-1679144159-8712 default 0 24s namespace-1679144161-1127 default 0 22s namespace-1679144171-4757 default 0 12s namespace-1679144173-30650 default 0 10s some-other-random default 0 12s has:all-ns-test-1 Successful (Bmessage:NAMESPACE NAME SECRETS AGE all-ns-test-1 default 0 0s all-ns-test-1 test 0 0s all-ns-test-2 default 0 0s all-ns-test-2 test 0 0s default default 0 6m14s kube-node-lease default 0 116s kube-public default 0 6m14s kube-system default 0 6m14s namespace-1679144064-29468 default 0 119s namespace-1679144072-4326 default 0 111s namespace-1679144079-7668 default 0 104s namespace-1679144080-10414 default 0 103s namespace-1679144086-22328 default 0 97s namespace-1679144093-32331 default 0 90s namespace-1679144094-25199 default 0 89s namespace-1679144101-12024 default 0 81s namespace-1679144103-10756 default 0 80s namespace-1679144106-6445 default 0 77s namespace-1679144118-22967 default 0 65s namespace-1679144134-15522 default 0 49s namespace-1679144142-11099 default 0 41s namespace-1679144144-13056 default 0 39s namespace-1679144146-22494 default 0 37s namespace-1679144147-2276 default 0 36s namespace-1679144159-8712 default 0 24s namespace-1679144161-1127 default 0 22s namespace-1679144171-4757 default 0 12s namespace-1679144173-30650 default 0 10s some-other-random default 0 12s has:all-ns-test-2 Successful (Bmessage:NAMESPACE NAME SECRETS AGE all-ns-test-1 default 0 1s all-ns-test-1 test 0 1s all-ns-test-2 default 0 1s all-ns-test-2 test 0 1s default default 0 6m15s kube-node-lease default 0 117s kube-public default 0 6m15s kube-system default 0 6m15s namespace-1679144064-29468 default 0 2m namespace-1679144072-4326 default 0 112s namespace-1679144079-7668 default 0 105s namespace-1679144080-10414 default 0 104s namespace-1679144086-22328 default 0 98s namespace-1679144093-32331 default 0 91s namespace-1679144094-25199 default 0 90s namespace-1679144101-12024 default 0 82s namespace-1679144103-10756 default 0 81s namespace-1679144106-6445 default 0 78s namespace-1679144118-22967 default 0 66s namespace-1679144134-15522 default 0 50s namespace-1679144142-11099 default 0 42s namespace-1679144144-13056 default 0 40s namespace-1679144146-22494 default 0 38s namespace-1679144147-2276 default 0 37s namespace-1679144159-8712 default 0 25s namespace-1679144161-1127 default 0 23s namespace-1679144171-4757 default 0 13s namespace-1679144173-30650 default 0 11s some-other-random default 0 13s has:all-ns-test-1 Successful (Bmessage:NAMESPACE NAME SECRETS AGE all-ns-test-1 default 0 1s all-ns-test-1 test 0 1s all-ns-test-2 default 0 1s all-ns-test-2 test 0 1s default default 0 6m15s kube-node-lease default 0 117s kube-public default 0 6m15s kube-system default 0 6m15s namespace-1679144064-29468 default 0 2m namespace-1679144072-4326 default 0 112s namespace-1679144079-7668 default 0 105s namespace-1679144080-10414 default 0 104s namespace-1679144086-22328 default 0 98s namespace-1679144093-32331 default 0 91s namespace-1679144094-25199 default 0 90s namespace-1679144101-12024 default 0 82s namespace-1679144103-10756 default 0 81s namespace-1679144106-6445 default 0 78s namespace-1679144118-22967 default 0 66s namespace-1679144134-15522 default 0 50s namespace-1679144142-11099 default 0 42s namespace-1679144144-13056 default 0 40s 
namespace-1679144146-22494 default 0 38s namespace-1679144147-2276 default 0 37s namespace-1679144159-8712 default 0 25s namespace-1679144161-1127 default 0 23s namespace-1679144171-4757 default 0 13s namespace-1679144173-30650 default 0 11s some-other-random default 0 13s has:all-ns-test-2 namespace "all-ns-test-1" deleted namespace "all-ns-test-2" deleted W0318 12:56:31.590000 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:56:31.590046 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource I0318 12:56:34.254217 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="all-ns-test-1" get.sh:442: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (BWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "valid-pod" force deleted get.sh:446: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bget.sh:450: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1: (BSuccessful (Bmessage:NAME STATUS ROLES AGE VERSION 127.0.0.1 NotReady 6m25s has not:NAMESPACE +++ exit code: 0 Recording: run_deprecated_api_tests Running command: run_deprecated_api_tests +++ Running case: test-cmd.run_deprecated_api_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_deprecated_api_tests +++ [0318 12:56:34] Testing deprecated APIs customresourcedefinition.apiextensions.k8s.io/deprecated.example.com created I0318 12:56:35.102259 19996 handler.go:165] Adding GroupVersion example.com v1 to ResourceManager I0318 12:56:35.102304 19996 handler.go:165] Adding GroupVersion example.com v1beta1 to ResourceManager Successful (Bmessage:deprecated.example.com has:deprecated.example.com Successful (Bmessage:Warning: example.com/v1beta1 DeprecatedKind is deprecated; use example.com/v1 DeprecatedKind No resources found in namespace-1679144173-30650 namespace. has:example.com/v1beta1 DeprecatedKind is deprecated Successful (Bmessage:Warning: example.com/v1beta1 DeprecatedKind is deprecated; use example.com/v1 DeprecatedKind No resources found in namespace-1679144173-30650 namespace. error: 1 warning received has:example.com/v1beta1 DeprecatedKind is deprecated Successful (Bmessage:Warning: example.com/v1beta1 DeprecatedKind is deprecated; use example.com/v1 DeprecatedKind No resources found in namespace-1679144173-30650 namespace. 
error: 1 warning received has:error: 1 warning received I0318 12:56:35.323304 19996 handler.go:165] Adding GroupVersion example.com v1 to ResourceManager I0318 12:56:35.323356 19996 handler.go:165] Adding GroupVersion example.com v1beta1 to ResourceManager customresourcedefinition.apiextensions.k8s.io "deprecated.example.com" deleted I0318 12:56:35.335357 19996 handler.go:165] Adding GroupVersion example.com v1 to ResourceManager I0318 12:56:35.335398 19996 handler.go:165] Adding GroupVersion example.com v1beta1 to ResourceManager +++ exit code: 0 Recording: run_template_output_tests Running command: run_template_output_tests +++ Running case: test-cmd.run_template_output_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_template_output_tests +++ [0318 12:56:35] Testing --template support on commands +++ [0318 12:56:35] Creating namespace namespace-1679144195-12349 namespace/namespace-1679144195-12349 created Context "test" modified. template-output.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (BW0318 12:56:36.061135 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:56:36.061187 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource pod/valid-pod created { "apiVersion": "v1", "items": [ { "apiVersion": "v1", "kind": "Pod", "metadata": { "creationTimestamp": "2023-03-18T12:56:36Z", "labels": { "name": "valid-pod" }, "name": "valid-pod", "namespace": "namespace-1679144195-12349", "resourceVersion": "3824", "uid": "1ae5dba6-7c01-4290-9431-2e971a320f57" }, "spec": { "containers": [ { "image": "registry.k8s.io/serve_hostname", "imagePullPolicy": "Always", "name": "kubernetes-serve-hostname", "resources": { "limits": { "cpu": "1", "memory": "512Mi" }, "requests": { "cpu": "1", "memory": "512Mi" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File" } ], "dnsPolicy": "ClusterFirst", "enableServiceLinks": true, "preemptionPolicy": "PreemptLowerPriority", "priority": 0, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30 }, "status": { "phase": "Pending", "qosClass": "Guaranteed" } } ], "kind": "List", "metadata": { "resourceVersion": "" } } template-output.sh:35: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (BSuccessful (Bmessage:valid-pod: has:valid-pod: Successful (Bmessage:valid-pod: has:valid-pod: Successful (Bmessage:valid-pod: has:valid-pod: Successful (Bmessage:valid-pod: has:valid-pod: Successful (Bmessage:valid-pod: has:valid-pod: Successful (Bmessage:scale-1: has:scale-1: Successful (Bmessage:redis-slave: has:redis-slave: Successful (Bmessage:pi: has:pi: Successful (Bmessage:127.0.0.1: has:127.0.0.1: node/127.0.0.1 untainted replicationcontroller/cassandra created I0318 12:56:37.945439 23056 event.go:307] "Event occurred" object="namespace-1679144195-12349/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-2bvjp" I0318 12:56:37.966435 23056 event.go:307] "Event occurred" object="namespace-1679144195-12349/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" 
reason="SuccessfulCreate" message="Created pod: cassandra-9lq78" Successful (Bmessage:cassandra: has:cassandra: reconciliation required create missing rules added: {Verbs:[create delete deletecollection get list patch update watch] APIGroups:[] Resources:[pods] ResourceNames:[] NonResourceURLs:[]} reconciliation required create missing subjects added: {Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:} reconciliation required create missing subjects added: {Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:} reconciliation required create missing rules added: {Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]} Successful (Bmessage:testing-CR:testing-CRB:testing-RB:testing-R: has:testing-CR:testing-CRB:testing-RB:testing-R: Successful (Bmessage:myclusterrole: has:myclusterrole: Successful (Bmessage:foo: has:foo: Successful (Bmessage:cm: has:cm: Successful (Bmessage:deploy: has:deploy: I0318 12:56:38.337463 23056 event.go:307] "Event occurred" object="namespace-1679144195-12349/deploy" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set deploy-64f8dd7bfd to 1" I0318 12:56:38.353929 23056 event.go:307] "Event occurred" object="namespace-1679144195-12349/deploy-64f8dd7bfd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deploy-64f8dd7bfd-dgxjc" cronjob.batch/pi created Successful (Bmessage:foo: has:foo: Successful (Bmessage:bar: has:bar: Successful (Bmessage:foo: has:foo: Successful (Bmessage:myrole: has:myrole: Successful (Bmessage:foo: has:foo: Successful (Bmessage:foo: has:foo: Successful (Bmessage:foo: has:foo: Successful (Bmessage:foo: has:foo: Successful (Bmessage:valid-pod: has:valid-pod: Successful (Bmessage:valid-pod: has:valid-pod: Successful (Bmessage:valid-pod: has:valid-pod: Successful (Bmessage:kubernetes: has:kubernetes: Successful (Bmessage:valid-pod: has:valid-pod: Successful (Bmessage:foo: has:foo: Successful (Bmessage:foo: has:foo: Successful (Bmessage:foo: has:foo: I0318 12:56:39.454613 23056 namespace_controller.go:182] "Namespace has been deleted" namespace="all-ns-test-2" Successful (Bmessage:foo: has:foo: Successful (Bmessage:foo: has:foo: Successful (Bmessage:foo: has:foo: Successful (Bmessage:foo: has:foo: Successful (Bmessage:foo: has:foo: Successful (Bmessage:apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://127.0.0.1:6443 name: local - cluster: certificate-authority-data: DATA+OMITTED server: https://does-not-work name: test-cluster - cluster: certificate-authority: /tmp/apiserver.crt server: "" name: test-cluster-1 - cluster: certificate-authority-data: DATA+OMITTED server: "" name: test-cluster-2 - cluster: certificate-authority-data: DATA+OMITTED server: "" name: test-cluster-3 contexts: - context: cluster: local namespace: namespace-1679144195-12349 user: test-admin name: test current-context: test kind: Config preferences: {} users: - name: test-admin user: token: REDACTED - name: testuser user: client-certificate: /tmp/testuser.crt client-key: /home/prow/go/src/k8s.io/kubernetes/hack/testdata/auth/testuser.key exec: apiVersion: client.authentication.k8s.io/v1beta1 args: null command: /tmp/invalid_execcredential.sh env: null interactiveMode: IfAvailable provideClusterInfo: false - name: user1 user: client-certificate: /tmp/test-client-certificate.crt client-key: /tmp/test-client-key.crt - name: 
user2 user: client-certificate-data: DATA+OMITTED client-key-data: DATA+OMITTED - name: user3 user: client-certificate-data: DATA+OMITTED client-key-data: DATA+OMITTED has:kind: Config Successful (Bmessage:deploy: has:deploy: Successful (Bmessage:deploy: has:deploy: Successful (Bmessage:deploy: has:deploy: Successful (Bmessage:deploy: has:deploy: Successful (Bmessage:Config: has:Config Successful (Bmessage:apiVersion: v1 kind: ConfigMap metadata: creationTimestamp: null name: cm has:kind: ConfigMap cronjob.batch "pi" deleted I0318 12:56:40.381518 23056 event.go:307] "Event occurred" object="namespace-1679144195-12349/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-vm5m2" pod "cassandra-2bvjp" deleted I0318 12:56:40.427380 23056 event.go:307] "Event occurred" object="namespace-1679144195-12349/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-fwlc8" pod "cassandra-9lq78" deleted pod "deploy-64f8dd7bfd-dgxjc" deleted I0318 12:56:40.460959 23056 event.go:307] "Event occurred" object="namespace-1679144195-12349/deploy-64f8dd7bfd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deploy-64f8dd7bfd-5fndq" pod "valid-pod" deleted replicationcontroller "cassandra" deleted clusterrole.rbac.authorization.k8s.io "myclusterrole" deleted clusterrolebinding.rbac.authorization.k8s.io "foo" deleted deployment.apps "deploy" deleted +++ exit code: 0 Recording: run_certificates_tests Running command: run_certificates_tests +++ Running case: test-cmd.run_certificates_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_certificates_tests +++ [0318 12:56:40] Testing certificates certificatesigningrequest.certificates.k8s.io/foo created certificate.sh:29: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: (Bcertificatesigningrequest.certificates.k8s.io/foo approved { "apiVersion": "v1", "items": [ { "apiVersion": "certificates.k8s.io/v1", "kind": "CertificateSigningRequest", "metadata": { "creationTimestamp": "2023-03-18T12:56:41Z", "name": "foo", "resourceVersion": "3893", "uid": "bf3418cb-2b8b-4706-b935-1af96cf0abb7" }, "spec": { "groups": [ "system:masters", "system:authenticated" ], "request": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2d6Q0NBV3NDQVFBd0ZURVRNQkVHQTFVRUF4TUthM1ZpWlMxaFpHMXBiakNDQVNJd0RRWUpLb1pJaHZjTgpBUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTlJ5dFhkcWV6ZTFBdXFjZkpWYlFBY1BJejZWY2pXSTZ5WmlQa3lrCjAzUW9GaHJGRXhUQXNPTGVFUHlrQXc1YndUOWZiajRXMzZmR2k4RGxsd1FzVGoyYzVUTnBnQkkwbElDbzI4aGcKbHYvTDJsMnRsWUVKdDdTbVhjblNvaGJ5S0h4TERRUHVmTVBBTkZsaEFmTUdCWEhRcmZMajhrTk1MUDA4UlBsbAp0N3V4RDVRdFA0cHlGL1Nhbm1XVEtRNU56WlJ4TC82UmhJMEpxSHJmNFFjQmg2dlR5bnFaRGVmMWVxNjBnQXllClNPRkpKYWRuK3h2VEFqLzgxZk1TbjdOSlNnaktDYkNEeXQ1eS9UZHd0SzZnVUQzM01paE5uNXhKTVF0MUZXUVAKRzY3eTA1QVh6b0pqTm5sWVA1MnJsTlhvNzh6aVMrN1E4RklxQzY0c05vWWhxeGNDQXdFQUFhQXBNQ2NHQ1NxRwpTSWIzRFFFSkRqRWFNQmd3Q1FZRFZSMFRCQUl3QURBTEJnTlZIUThFQkFNQ0JlQXdEUVlKS29aSWh2Y05BUUVMCkJRQURnZ0VCQU5CazlwaHpWYUJBci9xZHN4bXdPR1NQa094UkZlR1lyemRvaW5LTzVGUGZER2JkU0VWQ0o1K0wKeWJTNUtmaUZYU1EvNmk0RE9WRWtxcnFrVElIc1JNSlJwbTZ5Zjk1TU4zSWVLak9jQlV2b2VWVlpxMUNOUU8zagp2dklmK1A1NStLdXpvK0NIT1F5RWlvTlRPaUtGWTJseStEZEEwMXMxbU9FMTZSWGlWeFhGcFhGeGRJVmRPK0oxClZ1MW5yWG5ZVFJQRmtyaG80MTlpaDQzNjRPcGZqYXFXVCtmd20ySVZQSlBoaUJpYi9RRzRhUGJJcFh3amlCUUMKemV6WlM2L01nQkt1bUdMZ3Z5MitXNU9UWTJ5ZFFMZFVxbERFNEU2MFhmdVZ6bk5zWjZDS0tYY1pVaW1ZTkkwNgpKa0t4bGRjd0V2cmI0SmN3M2RFQjdOOUwvSW9ZNXFBPQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K", "signerName": "kubernetes.io/kube-apiserver-client", "usages": [ "digital signature", "key encipherment", "client auth" ], "username": "admin" }, "status": { "certificate": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2RENDQWRDZ0F3SUJBZ0lRSi9PU2JnQ3BVQ29xOXMyL2REaTVBREFOQmdrcWhraUc5dzBCQVFzRkFEQVUKTVJJd0VBWURWUVFEREFreE1qY3VNQzR3TGpFd0hoY05Nak13TXpFNE1USTFNVFF4V2hjTk1qUXdNekUzTVRJMQpNVFF4V2pBVk1STXdFUVlEVlFRREV3cHJkV0psTFdGa2JXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DCkFROEFNSUlCQ2dLQ0FRRUExSEsxZDJwN043VUM2cHg4bFZ0QUJ3OGpQcFZ5TllqckptSStUS1RUZENnV0dzVVQKRk1DdzR0NFEvS1FERGx2QlAxOXVQaGJmcDhhTHdPV1hCQ3hPUFp6bE0ybUFFalNVZ0tqYnlHQ1cvOHZhWGEyVgpnUW0zdEtaZHlkS2lGdklvZkVzTkErNTh3OEEwV1dFQjh3WUZjZEN0OHVQeVEwd3MvVHhFK1dXM3U3RVBsQzAvCmluSVg5SnFlWlpNcERrM05sSEV2L3BHRWpRbW9ldC9oQndHSHE5UEtlcGtONS9WNnJyU0FESjVJNFVrbHAyZjcKRzlNQ1AvelY4eEtmczBsS0NNb0pzSVBLM25MOU4zQzBycUJRUGZjeUtFMmZuRWt4QzNVVlpBOGJydkxUa0JmTwpnbU0yZVZnL25hdVUxZWp2ek9KTDd0RHdVaW9Mcml3MmhpR3JGd0lEQVFBQm96VXdNekFPQmdOVkhROEJBZjhFCkJBTUNCYUF3RXdZRFZSMGxCQXd3Q2dZSUt3WUJCUVVIQXdJd0RBWURWUjBUQVFIL0JBSXdBREFOQmdrcWhraUcKOXcwQkFRc0ZBQU9DQVFFQWxTekJIcm1xbjJGSWRHL1JSSThOaFU1ckc4V2dvMVBUZ1l2blMyaFIvYXFLS1gwKwpnUUZiejM3UGxCTUc4aWRka2RadkJyZy9CcVNhMmVwU2pWRnhKZkZldFRVd3JWcDVSTHFhQnU4L1VFekE2bFpIClhmVlRFMnhJaTZ4U1J3WGJBZDRxeXRlV25EV0QvWGRIZndBc0wvUE9mZ2YrTTdXSm5pMjd4bkh5N1FjbEFna0IKMG1pbGhxSC9tNXc0VU5CWVJ1STRQVDBpWDluN3lZYml4dGZRYVF5N0Jobm41M0s0NFR1c1RZaHNTU3R2UTZpcQp6V3lrdUN4dGpFMEZvVlN4UWo5aG9DK2JKK1JCU1hYa0VWRnVuNndxaFNkRHNaYlRIWXoyR2RLSWJGVFdVNzhZCnkyaUFwYWZWcTVqTkhvdG9GVWVjUm8xbTNRcW5GSndQdTRLN0t3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "conditions": [ { "lastTransitionTime": "2023-03-18T12:56:41Z", "lastUpdateTime": "2023-03-18T12:56:41Z", "message": "This CSR was approved by kubectl certificate approve.", "reason": "KubectlApprove", "status": "True", "type": "Approved" } ] } } ], "kind": "List", "metadata": { "resourceVersion": "" } } certificate.sh:32: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: Approved (Bquery for certificatesigningrequests had limit param query for events had limit param query for certificatesigningrequests had user-specified limit param Successful describe certificatesigningrequests verbose logs: I0318 12:56:41.363137 58863 loader.go:373] Config loaded from 
file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:56:41.367503 58863 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:56:41.372696 58863 round_trippers.go:553] GET https://127.0.0.1:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?limit=500 200 OK in 1 milliseconds I0318 12:56:41.375470 58863 round_trippers.go:553] GET https://127.0.0.1:6443/apis/certificates.k8s.io/v1/certificatesigningrequests/foo 200 OK in 1 milliseconds I0318 12:56:41.384937 58863 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.uid%3Dbf3418cb-2b8b-4706-b935-1af96cf0abb7%2CinvolvedObject.name%3Dfoo%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DCertificateSigningRequest&limit=500 200 OK in 8 milliseconds (Bcertificatesigningrequest.certificates.k8s.io "foo" deleted certificate.sh:36: Successful get csr {{range.items}}{{.metadata.name}}{{end}}: (Bcertificatesigningrequest.certificates.k8s.io/foo created certificate.sh:39: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: (Bcertificatesigningrequest.certificates.k8s.io/foo approved { "apiVersion": "v1", "items": [ { "apiVersion": "certificates.k8s.io/v1", "kind": "CertificateSigningRequest", "metadata": { "creationTimestamp": "2023-03-18T12:56:41Z", "name": "foo", "resourceVersion": "3897", "uid": "c24e7eca-d307-43e7-a951-b4d382cbec06" }, "spec": { "groups": [ "system:masters", "system:authenticated" ], "request": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2d6Q0NBV3NDQVFBd0ZURVRNQkVHQTFVRUF4TUthM1ZpWlMxaFpHMXBiakNDQVNJd0RRWUpLb1pJaHZjTgpBUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTlJ5dFhkcWV6ZTFBdXFjZkpWYlFBY1BJejZWY2pXSTZ5WmlQa3lrCjAzUW9GaHJGRXhUQXNPTGVFUHlrQXc1YndUOWZiajRXMzZmR2k4RGxsd1FzVGoyYzVUTnBnQkkwbElDbzI4aGcKbHYvTDJsMnRsWUVKdDdTbVhjblNvaGJ5S0h4TERRUHVmTVBBTkZsaEFmTUdCWEhRcmZMajhrTk1MUDA4UlBsbAp0N3V4RDVRdFA0cHlGL1Nhbm1XVEtRNU56WlJ4TC82UmhJMEpxSHJmNFFjQmg2dlR5bnFaRGVmMWVxNjBnQXllClNPRkpKYWRuK3h2VEFqLzgxZk1TbjdOSlNnaktDYkNEeXQ1eS9UZHd0SzZnVUQzM01paE5uNXhKTVF0MUZXUVAKRzY3eTA1QVh6b0pqTm5sWVA1MnJsTlhvNzh6aVMrN1E4RklxQzY0c05vWWhxeGNDQXdFQUFhQXBNQ2NHQ1NxRwpTSWIzRFFFSkRqRWFNQmd3Q1FZRFZSMFRCQUl3QURBTEJnTlZIUThFQkFNQ0JlQXdEUVlKS29aSWh2Y05BUUVMCkJRQURnZ0VCQU5CazlwaHpWYUJBci9xZHN4bXdPR1NQa094UkZlR1lyemRvaW5LTzVGUGZER2JkU0VWQ0o1K0wKeWJTNUtmaUZYU1EvNmk0RE9WRWtxcnFrVElIc1JNSlJwbTZ5Zjk1TU4zSWVLak9jQlV2b2VWVlpxMUNOUU8zagp2dklmK1A1NStLdXpvK0NIT1F5RWlvTlRPaUtGWTJseStEZEEwMXMxbU9FMTZSWGlWeFhGcFhGeGRJVmRPK0oxClZ1MW5yWG5ZVFJQRmtyaG80MTlpaDQzNjRPcGZqYXFXVCtmd20ySVZQSlBoaUJpYi9RRzRhUGJJcFh3amlCUUMKemV6WlM2L01nQkt1bUdMZ3Z5MitXNU9UWTJ5ZFFMZFVxbERFNEU2MFhmdVZ6bk5zWjZDS0tYY1pVaW1ZTkkwNgpKa0t4bGRjd0V2cmI0SmN3M2RFQjdOOUwvSW9ZNXFBPQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K", "signerName": "kubernetes.io/kube-apiserver-client", "usages": [ "digital signature", "key encipherment", "client auth" ], "username": "admin" }, "status": { "certificate": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2RENDQWRDZ0F3SUJBZ0lRZTdWK0ZpeHBiSlVVeEJZOU9EbUlPekFOQmdrcWhraUc5dzBCQVFzRkFEQVUKTVJJd0VBWURWUVFEREFreE1qY3VNQzR3TGpFd0hoY05Nak13TXpFNE1USTFNVFF4V2hjTk1qUXdNekUzTVRJMQpNVFF4V2pBVk1STXdFUVlEVlFRREV3cHJkV0psTFdGa2JXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DCkFROEFNSUlCQ2dLQ0FRRUExSEsxZDJwN043VUM2cHg4bFZ0QUJ3OGpQcFZ5TllqckptSStUS1RUZENnV0dzVVQKRk1DdzR0NFEvS1FERGx2QlAxOXVQaGJmcDhhTHdPV1hCQ3hPUFp6bE0ybUFFalNVZ0tqYnlHQ1cvOHZhWGEyVgpnUW0zdEtaZHlkS2lGdklvZkVzTkErNTh3OEEwV1dFQjh3WUZjZEN0OHVQeVEwd3MvVHhFK1dXM3U3RVBsQzAvCmluSVg5SnFlWlpNcERrM05sSEV2L3BHRWpRbW9ldC9oQndHSHE5UEtlcGtONS9WNnJyU0FESjVJNFVrbHAyZjcKRzlNQ1AvelY4eEtmczBsS0NNb0pzSVBLM25MOU4zQzBycUJRUGZjeUtFMmZuRWt4QzNVVlpBOGJydkxUa0JmTwpnbU0yZVZnL25hdVUxZWp2ek9KTDd0RHdVaW9Mcml3MmhpR3JGd0lEQVFBQm96VXdNekFPQmdOVkhROEJBZjhFCkJBTUNCYUF3RXdZRFZSMGxCQXd3Q2dZSUt3WUJCUVVIQXdJd0RBWURWUjBUQVFIL0JBSXdBREFOQmdrcWhraUcKOXcwQkFRc0ZBQU9DQVFFQU1SMVJrNEVQNERwWHRKcTFSU3RZMkw3cUNLVFVtbDFvZlVOSStIU0lyYWQ3MmxRNwozWXZsRklQUUdEazhKa0MzcFFUd1d4ZHc5d1BiYWhiYWJrWi9vT084Sy8vSmFwWE1wUXRGcWk4bGRNK2NKazZHCm1yUHNmalpNekd6aFB0VGQ4VDJvcjkxRHpJWk1aU3RMN25FbldNdmRMbHhFdFBMR05sQWtpT2JYRW5acWVyb1cKRmF4c2szWnhveUdGK2plOTN1K1hUdWNvR2l4L1EwUDQ2VHlXeGhoOTJHVkVsUEZBdFlQUmJIR3dwWnlRTndMRApwNlpLV25RYUZRekNjV2pVS3hhMEt4NlpXSzBKM3J6MU1odVhIM1hUalErTnFvcXFHRlpzV2pmL2hPT3VrZTY2Cmw2WFo3UEJhSXVmU2RVOWY4NHlueDVONlZwUGd0bXBJZXVVZXdBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "conditions": [ { "lastTransitionTime": "2023-03-18T12:56:41Z", "lastUpdateTime": "2023-03-18T12:56:41Z", "message": "This CSR was approved by kubectl certificate approve.", "reason": "KubectlApprove", "status": "True", "type": "Approved" } ] } } ], "kind": "List", "metadata": { "resourceVersion": "" } } certificate.sh:42: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: Approved (Bcertificatesigningrequest.certificates.k8s.io "foo" deleted certificate.sh:44: Successful get csr {{range.items}}{{.metadata.name}}{{end}}: (Bcertificatesigningrequest.certificates.k8s.io/foo created certificate.sh:48: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: (Bcertificatesigningrequest.certificates.k8s.io/foo denied { "apiVersion": "v1", "items": [ { "apiVersion": "certificates.k8s.io/v1", "kind": "CertificateSigningRequest", "metadata": { "creationTimestamp": "2023-03-18T12:56:42Z", "name": "foo", "resourceVersion": "3901", "uid": "b761e0a5-ec07-4780-85b8-34e5fb66b7b9" }, "spec": { "groups": [ "system:masters", "system:authenticated" ], "request": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2d6Q0NBV3NDQVFBd0ZURVRNQkVHQTFVRUF4TUthM1ZpWlMxaFpHMXBiakNDQVNJd0RRWUpLb1pJaHZjTgpBUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTlJ5dFhkcWV6ZTFBdXFjZkpWYlFBY1BJejZWY2pXSTZ5WmlQa3lrCjAzUW9GaHJGRXhUQXNPTGVFUHlrQXc1YndUOWZiajRXMzZmR2k4RGxsd1FzVGoyYzVUTnBnQkkwbElDbzI4aGcKbHYvTDJsMnRsWUVKdDdTbVhjblNvaGJ5S0h4TERRUHVmTVBBTkZsaEFmTUdCWEhRcmZMajhrTk1MUDA4UlBsbAp0N3V4RDVRdFA0cHlGL1Nhbm1XVEtRNU56WlJ4TC82UmhJMEpxSHJmNFFjQmg2dlR5bnFaRGVmMWVxNjBnQXllClNPRkpKYWRuK3h2VEFqLzgxZk1TbjdOSlNnaktDYkNEeXQ1eS9UZHd0SzZnVUQzM01paE5uNXhKTVF0MUZXUVAKRzY3eTA1QVh6b0pqTm5sWVA1MnJsTlhvNzh6aVMrN1E4RklxQzY0c05vWWhxeGNDQXdFQUFhQXBNQ2NHQ1NxRwpTSWIzRFFFSkRqRWFNQmd3Q1FZRFZSMFRCQUl3QURBTEJnTlZIUThFQkFNQ0JlQXdEUVlKS29aSWh2Y05BUUVMCkJRQURnZ0VCQU5CazlwaHpWYUJBci9xZHN4bXdPR1NQa094UkZlR1lyemRvaW5LTzVGUGZER2JkU0VWQ0o1K0wKeWJTNUtmaUZYU1EvNmk0RE9WRWtxcnFrVElIc1JNSlJwbTZ5Zjk1TU4zSWVLak9jQlV2b2VWVlpxMUNOUU8zagp2dklmK1A1NStLdXpvK0NIT1F5RWlvTlRPaUtGWTJseStEZEEwMXMxbU9FMTZSWGlWeFhGcFhGeGRJVmRPK0oxClZ1MW5yWG5ZVFJQRmtyaG80MTlpaDQzNjRPcGZqYXFXVCtmd20ySVZQSlBoaUJpYi9RRzRhUGJJcFh3amlCUUMKemV6WlM2L01nQkt1bUdMZ3Z5MitXNU9UWTJ5ZFFMZFVxbERFNEU2MFhmdVZ6bk5zWjZDS0tYY1pVaW1ZTkkwNgpKa0t4bGRjd0V2cmI0SmN3M2RFQjdOOUwvSW9ZNXFBPQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K", "signerName": "kubernetes.io/kube-apiserver-client", "usages": [ "digital signature", "key encipherment", "client auth" ], "username": "admin" }, "status": { "conditions": [ { "lastTransitionTime": "2023-03-18T12:56:42Z", "lastUpdateTime": "2023-03-18T12:56:42Z", "message": "This CSR was denied by kubectl certificate deny.", "reason": "KubectlDeny", "status": "True", "type": "Denied" } ] } } ], "kind": "List", "metadata": { "resourceVersion": "" } } certificate.sh:51: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: Denied (Bcertificatesigningrequest.certificates.k8s.io "foo" deleted certificate.sh:53: Successful get csr {{range.items}}{{.metadata.name}}{{end}}: (Bcertificatesigningrequest.certificates.k8s.io/foo created certificate.sh:56: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: (Bcertificatesigningrequest.certificates.k8s.io/foo denied { "apiVersion": "v1", "items": [ { "apiVersion": "certificates.k8s.io/v1", "kind": "CertificateSigningRequest", "metadata": { "creationTimestamp": "2023-03-18T12:56:42Z", "name": "foo", "resourceVersion": "3904", "uid": "234ec6fc-93b7-4e16-9432-21ebcc639b51" }, "spec": { "groups": [ "system:masters", "system:authenticated" ], "request": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2d6Q0NBV3NDQVFBd0ZURVRNQkVHQTFVRUF4TUthM1ZpWlMxaFpHMXBiakNDQVNJd0RRWUpLb1pJaHZjTgpBUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTlJ5dFhkcWV6ZTFBdXFjZkpWYlFBY1BJejZWY2pXSTZ5WmlQa3lrCjAzUW9GaHJGRXhUQXNPTGVFUHlrQXc1YndUOWZiajRXMzZmR2k4RGxsd1FzVGoyYzVUTnBnQkkwbElDbzI4aGcKbHYvTDJsMnRsWUVKdDdTbVhjblNvaGJ5S0h4TERRUHVmTVBBTkZsaEFmTUdCWEhRcmZMajhrTk1MUDA4UlBsbAp0N3V4RDVRdFA0cHlGL1Nhbm1XVEtRNU56WlJ4TC82UmhJMEpxSHJmNFFjQmg2dlR5bnFaRGVmMWVxNjBnQXllClNPRkpKYWRuK3h2VEFqLzgxZk1TbjdOSlNnaktDYkNEeXQ1eS9UZHd0SzZnVUQzM01paE5uNXhKTVF0MUZXUVAKRzY3eTA1QVh6b0pqTm5sWVA1MnJsTlhvNzh6aVMrN1E4RklxQzY0c05vWWhxeGNDQXdFQUFhQXBNQ2NHQ1NxRwpTSWIzRFFFSkRqRWFNQmd3Q1FZRFZSMFRCQUl3QURBTEJnTlZIUThFQkFNQ0JlQXdEUVlKS29aSWh2Y05BUUVMCkJRQURnZ0VCQU5CazlwaHpWYUJBci9xZHN4bXdPR1NQa094UkZlR1lyemRvaW5LTzVGUGZER2JkU0VWQ0o1K0wKeWJTNUtmaUZYU1EvNmk0RE9WRWtxcnFrVElIc1JNSlJwbTZ5Zjk1TU4zSWVLak9jQlV2b2VWVlpxMUNOUU8zagp2dklmK1A1NStLdXpvK0NIT1F5RWlvTlRPaUtGWTJseStEZEEwMXMxbU9FMTZSWGlWeFhGcFhGeGRJVmRPK0oxClZ1MW5yWG5ZVFJQRmtyaG80MTlpaDQzNjRPcGZqYXFXVCtmd20ySVZQSlBoaUJpYi9RRzRhUGJJcFh3amlCUUMKemV6WlM2L01nQkt1bUdMZ3Z5MitXNU9UWTJ5ZFFMZFVxbERFNEU2MFhmdVZ6bk5zWjZDS0tYY1pVaW1ZTkkwNgpKa0t4bGRjd0V2cmI0SmN3M2RFQjdOOUwvSW9ZNXFBPQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K", "signerName": "kubernetes.io/kube-apiserver-client", "usages": [ "digital signature", "key encipherment", "client auth" ], "username": "admin" }, "status": { "conditions": [ { "lastTransitionTime": "2023-03-18T12:56:43Z", "lastUpdateTime": "2023-03-18T12:56:43Z", "message": "This CSR was denied by kubectl certificate deny.", "reason": "KubectlDeny", "status": "True", "type": "Denied" } ] } } ], "kind": "List", "metadata": { "resourceVersion": "" } } certificate.sh:59: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: Denied (Bcertificatesigningrequest.certificates.k8s.io "foo" deleted certificate.sh:61: Successful get csr {{range.items}}{{.metadata.name}}{{end}}: (B+++ exit code: 0 Recording: run_cluster_management_tests Running command: run_cluster_management_tests +++ Running case: test-cmd.run_cluster_management_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_cluster_management_tests +++ [0318 12:56:43] Creating namespace namespace-1679144203-17371 namespace/namespace-1679144203-17371 created Context "test" modified. 
+++ [0318 12:56:43] Testing cluster-management commands node-management.sh:85: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1: (Bpod/test-pod-1 created pod/test-pod-2 created node-management.sh:91: Successful get nodes 127.0.0.1 {{range .spec.taints}}{{if eq .key "dedicated"}}{{.key}}={{.value}}:{{.effect}}{{end}}{{end}}: (Bnode/127.0.0.1 tainted node/127.0.0.1 tainted node-management.sh:95: Successful get nodes 127.0.0.1 {{range .spec.taints}}{{if eq .key "dedicated"}}{{.key}}={{.value}}:{{.effect}}{{end}}{{end}}: (Bnode/127.0.0.1 tainted node-management.sh:98: Successful get nodes 127.0.0.1 {{range .spec.taints}}{{if eq .key "dedicated"}}{{.key}}={{.value}}:{{.effect}}{{end}}{{end}}: dedicated=foo:PreferNoSchedule (Bnode/127.0.0.1 untainted node/127.0.0.1 tainted node-management.sh:103: Successful get nodes 127.0.0.1 {{range .spec.taints}}{{if eq .key "dedicated"}}{{.key}}={{.value}}:{{.effect}}{{end}}{{end}}: dedicated=:PreferNoSchedule (BSuccessful (Bmessage:kubectl-create kube-controller-manager kube-controller-manager kubectl-taint has:kubectl-taint node/127.0.0.1 untainted node/127.0.0.1 untainted node-management.sh:110: Successful get nodes 127.0.0.1 {{range .spec.taints}}{{if eq .key "dedicated"}}{{.key}}={{.value}}:{{.effect}}{{end}}{{end}}: dedicated=:PreferNoSchedule (Bnode/127.0.0.1 untainted node-management.sh:114: Successful get nodes 127.0.0.1 {{range .spec.taints}}{{if eq .key "dedicated"}}{{.key}}={{.value}}:{{.effect}}{{end}}{{end}}: (Bnode-management.sh:118: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (Bnode/127.0.0.1 cordoned (dry run) node/127.0.0.1 cordoned (server dry run) node-management.sh:121: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (Bnode-management.sh:125: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (Bnode/127.0.0.1 cordoned (dry run) Warning: deleting Pods that declare no controller: namespace-1679144203-17371/test-pod-1, namespace-1679144203-17371/test-pod-2 evicting pod namespace-1679144203-17371/test-pod-1 (dry run) evicting pod namespace-1679144203-17371/test-pod-2 (dry run) node/127.0.0.1 drained (dry run) node/127.0.0.1 cordoned (server dry run) Warning: deleting Pods that declare no controller: namespace-1679144203-17371/test-pod-1, namespace-1679144203-17371/test-pod-2 evicting pod namespace-1679144203-17371/test-pod-2 (server dry run) evicting pod namespace-1679144203-17371/test-pod-1 (server dry run) node/127.0.0.1 drained (server dry run) node-management.sh:129: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1: (Bnode-management.sh:130: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (Bnode-management.sh:134: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (BW0318 12:56:45.697286 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:56:45.697321 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource node-management.sh:136: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2, (Bnode/127.0.0.1 cordoned (dry run) Warning: deleting Pods that declare no controller: namespace-1679144203-17371/test-pod-1 evicting pod namespace-1679144203-17371/test-pod-1 (dry run) node/127.0.0.1 drained (dry run) node/127.0.0.1 
cordoned (server dry run) Warning: deleting Pods that declare no controller: namespace-1679144203-17371/test-pod-1 evicting pod namespace-1679144203-17371/test-pod-1 (server dry run) node/127.0.0.1 drained (server dry run) node-management.sh:140: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2, (BWarning: deleting Pods that declare no controller: namespace-1679144203-17371/test-pod-1 W0318 12:56:56.069223 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:56:56.069263 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource W0318 12:57:10.530181 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:57:10.530220 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource W0318 12:57:17.475910 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0318 12:57:17.475948 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource Successful (Bmessage:node/127.0.0.1 cordoned evicting pod namespace-1679144203-17371/test-pod-1 pod "test-pod-1" has DeletionTimestamp older than 1 seconds, skipping node/127.0.0.1 drained has:evicting pod .*/test-pod-1 node-management.sh:145: Successful get pods/test-pod-2 {{.metadata.deletionTimestamp}}: (BWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "test-pod-1" force deleted Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
pod "test-pod-2" force deleted pod/test-pod-1 created pod/test-pod-2 created node/127.0.0.1 uncordoned node-management.sh:151: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (Bnode-management.sh:155: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (BSuccessful (Bmessage:node/127.0.0.1 already uncordoned (dry run) has:already uncordoned Successful (Bmessage:node/127.0.0.1 already uncordoned (server dry run) has:already uncordoned node-management.sh:161: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (Bnode/127.0.0.1 labeled node-management.sh:166: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label (BSuccessful (Bmessage:error: cannot specify both a node name and a --selector option See 'kubectl drain -h' for help and examples has:cannot specify both a node name node-management.sh:172: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label (Bnode-management.sh:174: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (Bnode-management.sh:176: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2, (BSuccessful (Bmessage:I0318 12:57:19.540482 59919 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:57:19.545299 59919 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:57:19.556217 59919 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?labelSelector=test%3Dlabel&limit=1 200 OK in 7 milliseconds node/127.0.0.1 cordoned (dry run) I0318 12:57:19.559690 59919 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds I0318 12:57:19.562498 59919 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6Mzk1Mywic3RhcnQiOiJuYW1lc3BhY2UtMTY3OTE0NDIwMy0xNzM3MS90ZXN0LXBvZC0xXHUwMDAwIn0&fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds Warning: deleting Pods that declare no controller: namespace-1679144203-17371/test-pod-1, namespace-1679144203-17371/test-pod-2 evicting pod namespace-1679144203-17371/test-pod-1 (dry run) evicting pod namespace-1679144203-17371/test-pod-2 (dry run) node/127.0.0.1 drained (dry run) has:/v1/nodes?labelSelector=test%3Dlabel&limit=1 200 OK Successful (Bmessage:I0318 12:57:19.540482 59919 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:57:19.545299 59919 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:57:19.556217 59919 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?labelSelector=test%3Dlabel&limit=1 200 OK in 7 milliseconds node/127.0.0.1 cordoned (dry run) I0318 12:57:19.559690 59919 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds I0318 12:57:19.562498 59919 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6Mzk1Mywic3RhcnQiOiJuYW1lc3BhY2UtMTY3OTE0NDIwMy0xNzM3MS90ZXN0LXBvZC0xXHUwMDAwIn0&fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds Warning: deleting Pods that declare no controller: namespace-1679144203-17371/test-pod-1, namespace-1679144203-17371/test-pod-2 evicting pod namespace-1679144203-17371/test-pod-1 (dry run) evicting pod 
namespace-1679144203-17371/test-pod-2 (dry run) node/127.0.0.1 drained (dry run) has:/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK Successful (Bmessage:I0318 12:57:19.540482 59919 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:57:19.545299 59919 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:57:19.556217 59919 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?labelSelector=test%3Dlabel&limit=1 200 OK in 7 milliseconds node/127.0.0.1 cordoned (dry run) I0318 12:57:19.559690 59919 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds I0318 12:57:19.562498 59919 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6Mzk1Mywic3RhcnQiOiJuYW1lc3BhY2UtMTY3OTE0NDIwMy0xNzM3MS90ZXN0LXBvZC0xXHUwMDAwIn0&fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds Warning: deleting Pods that declare no controller: namespace-1679144203-17371/test-pod-1, namespace-1679144203-17371/test-pod-2 evicting pod namespace-1679144203-17371/test-pod-1 (dry run) evicting pod namespace-1679144203-17371/test-pod-2 (dry run) node/127.0.0.1 drained (dry run) has:/v1/pods?continue=.*&fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK Successful (Bmessage:I0318 12:57:19.540482 59919 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:57:19.545299 59919 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:57:19.556217 59919 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?labelSelector=test%3Dlabel&limit=1 200 OK in 7 milliseconds node/127.0.0.1 cordoned (dry run) I0318 12:57:19.559690 59919 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds I0318 12:57:19.562498 59919 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6Mzk1Mywic3RhcnQiOiJuYW1lc3BhY2UtMTY3OTE0NDIwMy0xNzM3MS90ZXN0LXBvZC0xXHUwMDAwIn0&fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds Warning: deleting Pods that declare no controller: namespace-1679144203-17371/test-pod-1, namespace-1679144203-17371/test-pod-2 evicting pod namespace-1679144203-17371/test-pod-1 (dry run) evicting pod namespace-1679144203-17371/test-pod-2 (dry run) node/127.0.0.1 drained (dry run) has:evicting pod .*/test-pod-1 Successful (Bmessage:I0318 12:57:19.540482 59919 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:57:19.545299 59919 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:57:19.556217 59919 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?labelSelector=test%3Dlabel&limit=1 200 OK in 7 milliseconds node/127.0.0.1 cordoned (dry run) I0318 12:57:19.559690 59919 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds I0318 12:57:19.562498 59919 round_trippers.go:553] GET 
https://127.0.0.1:6443/api/v1/pods?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6Mzk1Mywic3RhcnQiOiJuYW1lc3BhY2UtMTY3OTE0NDIwMy0xNzM3MS90ZXN0LXBvZC0xXHUwMDAwIn0&fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds Warning: deleting Pods that declare no controller: namespace-1679144203-17371/test-pod-1, namespace-1679144203-17371/test-pod-2 evicting pod namespace-1679144203-17371/test-pod-1 (dry run) evicting pod namespace-1679144203-17371/test-pod-2 (dry run) node/127.0.0.1 drained (dry run) has:evicting pod .*/test-pod-2 node/127.0.0.1 already uncordoned node-management.sh:188: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (BSuccessful (Bmessage:I0318 12:57:19.742322 59964 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:57:19.747405 59964 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:57:19.752839 59964 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?labelSelector=test%3Dlabel&limit=500 200 OK in 2 milliseconds node/127.0.0.1 cordoned (dry run) I0318 12:57:19.756183 59964 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&limit=500 200 OK in 1 milliseconds Warning: deleting Pods that declare no controller: namespace-1679144203-17371/test-pod-1, namespace-1679144203-17371/test-pod-2 evicting pod namespace-1679144203-17371/test-pod-1 (dry run) evicting pod namespace-1679144203-17371/test-pod-2 (dry run) node/127.0.0.1 drained (dry run) has:/v1/nodes?labelSelector=test%3Dlabel&limit=500 200 OK Successful (Bmessage:I0318 12:57:19.742322 59964 loader.go:373] Config loaded from file: /tmp/tmp.JFDEKO8UeQ/.kube/config I0318 12:57:19.747405 59964 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0318 12:57:19.752839 59964 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?labelSelector=test%3Dlabel&limit=500 200 OK in 2 milliseconds node/127.0.0.1 cordoned (dry run) I0318 12:57:19.756183 59964 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&limit=500 200 OK in 1 milliseconds Warning: deleting Pods that declare no controller: namespace-1679144203-17371/test-pod-1, namespace-1679144203-17371/test-pod-2 evicting pod namespace-1679144203-17371/test-pod-1 (dry run) evicting pod namespace-1679144203-17371/test-pod-2 (dry run) node/127.0.0.1 drained (dry run) has:/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&limit=500 200 OK Successful (Bmessage:error: USAGE: cordon NODE [flags] See 'kubectl cordon -h' for help and examples has:error\: USAGE\: cordon NODE node/127.0.0.1 already uncordoned Successful (Bmessage:error: You must provide one or more resources by argument or filename. Example resource specifications include: '-f rsrc.yaml' '--filename=rsrc.json' ' ' '' has:must provide one or more resources Successful (Bmessage:node/127.0.0.1 cordoned has:node/127.0.0.1 cordoned Successful (Bmessage: has not:cordoned node-management.sh:213: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: true (BWarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "test-pod-1" force deleted Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
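The verbose (-v=6) drain runs above assert on the API requests themselves: one run paginates the pod list (limit=1 plus a continue token), the other uses the default page size (a single limit=500 list). A sketch of the presumed invocations, with flag values inferred from the logged query strings rather than copied from the script:

  # Chunked listing: one pod per page, selecting nodes and pods by label;
  # -v=6 surfaces the round_trippers request lines asserted on above.
  kubectl drain --selector=test=label --pod-selector=type=test-pod \
    --chunk-size=1 --force --dry-run=client -v=6
  # Default chunk size (500): a single pods list per node.
  kubectl drain --selector=test=label --force --dry-run=client -v=6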
pod "test-pod-2" force deleted +++ exit code: 0 Recording: run_plugins_tests Running command: run_plugins_tests +++ Running case: test-cmd.run_plugins_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_plugins_tests +++ [0318 12:57:20] Testing kubectl plugins Successful (Bmessage:The following compatible plugins are available: test/fixtures/pkg/kubectl/plugins/version/kubectl-version - warning: kubectl-version overwrites existing command: "kubectl version" error: one plugin warning was found has:kubectl-version overwrites existing command: "kubectl version" Successful (Bmessage:The following compatible plugins are available: test/fixtures/pkg/kubectl/plugins/kubectl-foo test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo error: one plugin warning was found has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin Successful (Bmessage:The following compatible plugins are available: test/fixtures/pkg/kubectl/plugins/kubectl-foo has:plugins are available Successful (Bmessage:Unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping... error: unable to find any kubectl plugins in your PATH has:unable to find any kubectl plugins in your PATH Successful (Bmessage:I am plugin foo has:plugin foo Successful (Bmessage:I am plugin bar called with args test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1 has:test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1 WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version. 
Recording: run_impersonation_tests
Running command: run_impersonation_tests
+++ Running case: test-cmd.run_impersonation_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_impersonation_tests
+++ [0318 12:57:20] Testing impersonation
Successful
message:error: requesting uid, groups or user-extra for test-admin without impersonating a user
has:without impersonating a user
Successful
message:error: requesting uid, groups or user-extra for test-admin without impersonating a user
has:without impersonating a user
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:60: Successful get csr/foo {{.spec.username}}: user1
authorization.sh:61: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:66: Successful get csr/foo {{len .spec.groups}}: 4
authorization.sh:67: Successful get csr/foo {{range .spec.groups}}{{.}} {{end}}: group2 group1 ,,,chameleon system:authenticated
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:72: Successful get csr/foo {{.spec.username}}: user1
authorization.sh:73: Successful get csr/foo {{.spec.uid}}: abc123
certificatesigningrequest.certificates.k8s.io "foo" deleted
+++ exit code: 0
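The CSR spec values asserted above (username, groups, uid) are stamped by the API server from kubectl's impersonation flags. A sketch of the presumed calls; the manifest name csr.yaml is hypothetical, and the values are taken from the assertions:

  # --as-uid and --as-group are only valid together with --as; omitting --as
  # produces the "without impersonating a user" error asserted above.
  kubectl create -f csr.yaml --as=user1
  kubectl create -f csr.yaml --as=user1 \
    --as-group=group2 --as-group=group1 --as-group=,,,chameleon
  kubectl create -f csr.yaml --as=user1 --as-uid=abc123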
Recording: run_wait_tests
Running command: run_wait_tests
+++ Running case: test-cmd.run_wait_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_wait_tests
+++ [0318 12:57:22] Testing kubectl wait
+++ [0318 12:57:22] Creating namespace namespace-1679144242-18814
namespace/namespace-1679144242-18814 created
Context "test" modified.
deployment.apps/test-1 created
I0318 12:57:22.307860 23056 event.go:307] "Event occurred" object="namespace-1679144242-18814/test-1" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-1-7697bf65f7 to 1"
I0318 12:57:22.333985 23056 event.go:307] "Event occurred" object="namespace-1679144242-18814/test-1-7697bf65f7" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-1-7697bf65f7-5rxnv"
deployment.apps/test-2 created
I0318 12:57:22.385449 23056 event.go:307] "Event occurred" object="namespace-1679144242-18814/test-2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-2-675f68f47d to 1"
I0318 12:57:22.399930 23056 event.go:307] "Event occurred" object="namespace-1679144242-18814/test-2-675f68f47d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-2-675f68f47d-tghz4"
wait.sh:36: Successful get deployments {{range .items}}{{.metadata.name}},{{end}}: test-1,test-2,
W0318 12:57:27.867088 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0318 12:57:27.867142 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0318 12:57:41.927788 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0318 12:57:41.927831 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0318 12:57:47.350437 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0318 12:57:47.350477 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: timed out waiting for the condition on deployments/test-1
has:timed out
deployment.apps "test-1" deleted
deployment.apps "test-2" deleted
Successful
message:deployment.apps/test-1 condition met
deployment.apps/test-2 condition met
has:test-1 condition met
Successful
message:deployment.apps/test-1 condition met
deployment.apps/test-2 condition met
has:test-2 condition met
deployment.apps/dtest created
W0318 12:57:54.916678 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0318 12:57:54.916716 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0318 12:57:54.925515 23056 event.go:307] "Event occurred" object="namespace-1679144242-18814/dtest" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dtest-7665fff87c to 3"
I0318 12:57:54.951899 23056 event.go:307] "Event occurred" object="namespace-1679144242-18814/dtest-7665fff87c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dtest-7665fff87c-wj4tn"
I0318 12:57:54.968019 23056 event.go:307] "Event occurred" object="namespace-1679144242-18814/dtest-7665fff87c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dtest-7665fff87c-cbqhj"
I0318 12:57:54.968048 23056 event.go:307] "Event occurred" object="namespace-1679144242-18814/dtest-7665fff87c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dtest-7665fff87c-wh9cl"
wait.sh:82: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: dtest

real	0m1.058s
user	0m0.081s
sys	0m0.014s
Successful
message:timed out waiting for the condition on pods/dtest-7665fff87c-cbqhj
timed out waiting for the condition on pods/dtest-7665fff87c-wh9cl
timed out waiting for the condition on pods/dtest-7665fff87c-wj4tn
has:timed out waiting for the condition
deployment.apps "dtest" deleted
+++ exit code: 0
Recording: run_kubectl_debug_pod_tests
Running command: run_kubectl_debug_pod_tests
+++ Running case: test-cmd.run_kubectl_debug_pod_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_debug_pod_tests
+++ [0318 12:57:56] Creating namespace namespace-1679144276-27542
namespace/namespace-1679144276-27542 created
Context "test" modified.
+++ [0318 12:57:56] Testing kubectl debug (pod tests)
pod/target created
debug.sh:32: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:
debug.sh:36: Successful get pod/target {{range.spec.ephemeralContainers}}{{.name}}:{{end}}: debug-container:
pod "target" deleted
pod/target created
debug.sh:44: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:
debug.sh:48: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:target-copy:
debug.sh:49: Successful get pod/target-copy {{range.spec.containers}}{{.name}}:{{end}}: target:debug-container:
debug.sh:50: Successful get pod/target-copy {{range.spec.containers}}{{.image}}:{{end}}: registry.k8s.io/nginx:1.7.9:busybox:
pod "target" deleted
pod "target-copy" deleted
pod/target created
debug.sh:56: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:
debug.sh:60: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target-copy:
debug.sh:61: Successful get pod/target-copy {{range.spec.containers}}{{.name}}:{{end}}: target:debug-container:
debug.sh:62: Successful get pod/target-copy {{range.spec.containers}}{{.image}}:{{end}}: registry.k8s.io/nginx:1.7.9:busybox:
pod "target-copy" deleted
pod/target created
debug.sh:68: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:
debug.sh:69: Successful get pod/target {{(index .spec.containers 0).name}}: target
debug.sh:73: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:target-copy:
debug.sh:74: Successful get pod/target-copy {{(len .spec.containers)}}:{{(index .spec.containers 0).image}}: 1:busybox
pod "target" deleted
pod "target-copy" deleted
pod/target created
debug.sh:80: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:
debug.sh:81: Successful get pod/target {{(index .spec.containers 0).name}}: target
debug.sh:86: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:target-copy:
debug.sh:87: Successful get pod/target-copy {{(len .spec.containers)}}:{{(index .spec.containers 0).image}}: 1:busybox
pod "target" deleted
pod "target-copy" deleted
+++ exit code: 0
Recording: run_kubectl_debug_general_tests
Running command: run_kubectl_debug_general_tests
+++ Running case: test-cmd.run_kubectl_debug_general_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_debug_general_tests
+++ [0318 12:57:58] Creating namespace namespace-1679144278-7969
namespace/namespace-1679144278-7969 created
Context "test" modified.
+++ [0318 12:57:59] Testing kubectl debug profile general
pod/target created
debug.sh:140: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:
debug.sh:144: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:target-copy:
debug.sh:145: Successful get pod/target-copy {{range.spec.containers}}{{.name}}:{{end}}: target:debug-container:
debug.sh:146: Successful get pod/target-copy {{range.spec.containers}}{{.image}}:{{end}}: registry.k8s.io/nginx:1.7.9:busybox:
debug.sh:147: Successful get pod/target-copy {{range.spec.containers}}{{if (index . "livenessProbe")}}:{{end}}{{end}}:
debug.sh:148: Successful get pod/target-copy {{range.spec.containers}}{{if (index . "readinessProbe")}}:{{end}}{{end}}:
debug.sh:149: Successful get pod/target-copy {{(index (index .spec.containers 1).securityContext.capabilities.add 0)}}: SYS_PTRACE
debug.sh:150: Successful get pod/target-copy {{.spec.shareProcessNamespace}}: true
pod "target" deleted
pod "target-copy" deleted
pod/target created
debug.sh:159: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:
debug.sh:163: Successful get pod/target {{range.spec.ephemeralContainers}}{{.name}}:{{.image}}{{end}}: debug-container:busybox
debug.sh:164: Successful get pod/target {{(index (index .spec.ephemeralContainers 0).securityContext.capabilities.add 0)}}: SYS_PTRACE
pod "target" deleted
+++ exit code: 0
Recording: run_kubectl_debug_baseline_tests
Running command: run_kubectl_debug_baseline_tests
+++ Running case: test-cmd.run_kubectl_debug_baseline_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_debug_baseline_tests
+++ [0318 12:58:00] Creating namespace namespace-1679144280-11831
namespace/namespace-1679144280-11831 created
W0318 12:58:00.456879 23056 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0318 12:58:00.456921 23056 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [0318 12:58:00] Testing kubectl debug profile baseline
pod/target created
debug.sh:219: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:
debug.sh:223: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:target-copy:
debug.sh:224: Successful get pod/target-copy {{range.spec.containers}}{{.name}}:{{end}}: target:debug-container:
debug.sh:225: Successful get pod/target-copy {{range.spec.containers}}{{.image}}:{{end}}: registry.k8s.io/nginx:1.7.9:busybox:
debug.sh:226: Successful get pod/target-copy {{range.spec.containers}}{{if (index . "livenessProbe")}}:{{end}}{{end}}:
debug.sh:227: Successful get pod/target-copy {{range.spec.containers}}{{if (index . "readinessProbe")}}:{{end}}{{end}}:
debug.sh:228: Successful get pod/target-copy {{if (index (index .spec.containers 0) "securityContext")}}:{{end}}:
debug.sh:229: Successful get pod/target-copy {{.spec.shareProcessNamespace}}: true
pod "target" deleted
pod "target-copy" deleted
pod/target created
debug.sh:238: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:
debug.sh:242: Successful get pod/target {{range.spec.ephemeralContainers}}{{.name}}:{{.image}}{{end}}: debug-container:busybox
debug.sh:243: Successful get pod/target {{if (index (index .spec.ephemeralContainers 0) "securityContext")}}:{{end}}:
pod "target" deleted
+++ exit code: 0
Recording: run_kubectl_debug_node_tests
Running command: run_kubectl_debug_node_tests
+++ Running case: test-cmd.run_kubectl_debug_node_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_debug_node_tests
+++ [0318 12:58:01] Creating namespace namespace-1679144281-10231
namespace/namespace-1679144281-10231 created
Context "test" modified.
+++ [0318 12:58:01] Testing kubectl debug (pod tests)
debug.sh:105: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:
Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
debug.sh:109: Successful get pod {{(len .items)}}: 1
Successful
message:Creating debugging pod node-debugger-127.0.0.1-p86mq with container debugger on node 127.0.0.1.
has:node-debugger-127.0.0.1-p86mq
debug.sh:112: Successful get pod/node-debugger-127.0.0.1-p86mq {{(index .spec.containers 0).image}}: busybox
debug.sh:113: Successful get pod/node-debugger-127.0.0.1-p86mq {{.spec.nodeName}}: 127.0.0.1
debug.sh:114: Successful get pod/node-debugger-127.0.0.1-p86mq {{.spec.hostIPC}}: true
debug.sh:115: Successful get pod/node-debugger-127.0.0.1-p86mq {{.spec.hostNetwork}}: true
debug.sh:116: Successful get pod/node-debugger-127.0.0.1-p86mq {{.spec.hostPID}}: true
debug.sh:117: Successful get pod/node-debugger-127.0.0.1-p86mq {{(index (index .spec.containers 0).volumeMounts 0).mountPath}}: /host
debug.sh:118: Successful get pod/node-debugger-127.0.0.1-p86mq {{(index .spec.volumes 0).hostPath.path}}: /
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "node-debugger-127.0.0.1-p86mq" force deleted
+++ exit code: 0
Recording: run_kubectl_debug_general_node_tests
Running command: run_kubectl_debug_general_node_tests
+++ Running case: test-cmd.run_kubectl_debug_general_node_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_debug_general_node_tests
+++ [0318 12:58:02] Creating namespace namespace-1679144282-405
namespace/namespace-1679144282-405 created
Context "test" modified.
+++ [0318 12:58:02] Testing kubectl debug profile general (node)
debug.sh:183: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:
Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
debug.sh:187: Successful get pod {{(len .items)}}: 1
Successful
message:Creating debugging pod node-debugger-127.0.0.1-69jn8 with container debugger on node 127.0.0.1.
has:node-debugger-127.0.0.1-69jn8
debug.sh:190: Successful get pod/node-debugger-127.0.0.1-69jn8 {{(index .spec.containers 0).image}}: busybox
debug.sh:191: Successful get pod/node-debugger-127.0.0.1-69jn8 {{.spec.nodeName}}: 127.0.0.1
debug.sh:192: Successful get pod/node-debugger-127.0.0.1-69jn8 {{.spec.hostIPC}}: true
debug.sh:193: Successful get pod/node-debugger-127.0.0.1-69jn8 {{.spec.hostNetwork}}: true
debug.sh:194: Successful get pod/node-debugger-127.0.0.1-69jn8 {{.spec.hostPID}}: true
debug.sh:195: Successful get pod/node-debugger-127.0.0.1-69jn8 {{(index (index .spec.containers 0).volumeMounts 0).mountPath}}: /host
debug.sh:196: Successful get pod/node-debugger-127.0.0.1-69jn8 {{(index .spec.volumes 0).hostPath.path}}: /
debug.sh:197: Successful get pod/node-debugger-127.0.0.1-69jn8 {{if (index (index .spec.containers 0) "securityContext")}}:{{end}}:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "node-debugger-127.0.0.1-69jn8" force deleted
+++ exit code: 0
Recording: run_kubectl_debug_baseline_node_tests
Running command: run_kubectl_debug_baseline_node_tests
+++ Running case: test-cmd.run_kubectl_debug_baseline_node_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_debug_baseline_node_tests
+++ [0318 12:58:03] Creating namespace namespace-1679144283-20149
namespace/namespace-1679144283-20149 created
Context "test" modified.
+++ [0318 12:58:03] Testing kubectl debug profile baseline (node)
debug.sh:262: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:
Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
debug.sh:266: Successful get pod {{(len .items)}}: 1
Successful
message:Creating debugging pod node-debugger-127.0.0.1-fzhcm with container debugger on node 127.0.0.1.
has:node-debugger-127.0.0.1-fzhcm
debug.sh:269: Successful get pod/node-debugger-127.0.0.1-fzhcm {{(index .spec.containers 0).image}}: busybox
debug.sh:270: Successful get pod/node-debugger-127.0.0.1-fzhcm {{.spec.nodeName}}: 127.0.0.1
debug.sh:271: Successful get pod/node-debugger-127.0.0.1-fzhcm {{.spec.hostIPC}}:
debug.sh:272: Successful get pod/node-debugger-127.0.0.1-fzhcm {{.spec.hostNetwork}}:
debug.sh:273: Successful get pod/node-debugger-127.0.0.1-fzhcm {{.spec.hostPID}}:
debug.sh:274: Successful get pod/node-debugger-127.0.0.1-fzhcm {{if (index (index .spec.containers 0) "securityContext")}}:{{end}}:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "node-debugger-127.0.0.1-fzhcm" force deleted
+++ exit code: 0
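The three node-debug cases above differ in the applied profile: the default and `general` profiles keep the host namespaces (hostIPC/hostNetwork/hostPID set to true) and mount the node's root filesystem at /host, while `baseline` leaves the host namespaces unset and applies no securityContext. A sketch of the presumed invocations, with profile names taken from the test titles (the exact flags live in test/cmd/debug.sh):

  # Default node debugging: host namespaces plus a hostPath mount of / at /host.
  kubectl debug node/127.0.0.1 --image=busybox
  # Profile variants asserted above (debug profiles are new in this 1.27 build):
  kubectl debug node/127.0.0.1 --image=busybox --profile=general
  kubectl debug node/127.0.0.1 --image=busybox --profile=baseline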
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
No resources found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
No resources found
FAILED TESTS: run_kubectl_request_timeout_tests,
junit report dir: /logs/artifacts
+++ [0318 12:58:04] Clean up complete
make: *** [Makefile:293: test-cmd] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.