Docker in Docker enabled, initializing...
================================================================================
Starting Docker: docker.
Waiting for docker to be ready, sleeping for 1 seconds.
================================================================================
Done setting up docker in docker.
Activated service account credentials for: [prow-build@k8s-infra-prow-build.iam.gserviceaccount.com]
+ WRAPPED_COMMAND_PID=174
+ wait 174
+ ./hack/jenkins/test-dockerized.sh
+ export PATH=/home/prow/go/bin:/home/prow/go/src/k8s.io/kubernetes/third_party/etcd:/usr/local/go/bin:/home/prow/go/bin:/go/bin:/usr/local/go/bin:/google-cloud-sdk/bin:/workspace:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ PATH=/home/prow/go/bin:/home/prow/go/src/k8s.io/kubernetes/third_party/etcd:/usr/local/go/bin:/home/prow/go/bin:/go/bin:/usr/local/go/bin:/google-cloud-sdk/bin:/workspace:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ export GO111MODULE=off
+ GO111MODULE=off
+ pushd ./hack/tools
+ GO111MODULE=on
+ go install gotest.tools/gotestsum
go: downloading gotest.tools/gotestsum v1.6.4
go: downloading github.com/dnephin/pflag v1.0.7
go: downloading golang.org/x/tools v0.1.10
go: downloading github.com/fatih/color v1.13.0
go: downloading github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
go: downloading github.com/pkg/errors v0.9.1
go: downloading github.com/jonboulle/clockwork v0.2.2
go: downloading golang.org/x/crypto v0.0.0-20220214200702-86341886e292
go: downloading golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
go: downloading github.com/fsnotify/fsnotify v1.5.1
go: downloading golang.org/x/sys v0.0.0-20220209214540-3681064d5158
go: downloading github.com/mattn/go-colorable v0.1.12
go: downloading github.com/mattn/go-isatty v0.0.14
go: downloading golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1
go: downloading golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1
go: downloading golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3
+ popd
+ export KUBE_COVER=n
+ KUBE_COVER=n
+ export ARTIFACTS=/logs/artifacts
+ ARTIFACTS=/logs/artifacts
+ export KUBE_KEEP_VERBOSE_TEST_OUTPUT=y
+ KUBE_KEEP_VERBOSE_TEST_OUTPUT=y
+ export KUBE_INTEGRATION_TEST_MAX_CONCURRENCY=4
+ KUBE_INTEGRATION_TEST_MAX_CONCURRENCY=4
+ export LOG_LEVEL=4
+ LOG_LEVEL=4
+ cd /home/prow/go/src/k8s.io/kubernetes
+ make generated_files
+++ [0513 22:20:04] Building go targets for linux/amd64
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
+++ [0513 22:20:14] Building go targets for linux/amd64
    k8s.io/code-generator/cmd/prerelease-lifecycle-gen (non-static)
+++ [0513 22:20:19] Generating prerelease lifecycle code for 26 targets
+++ [0513 22:20:21] Building go targets for linux/amd64
    k8s.io/code-generator/cmd/deepcopy-gen (non-static)
+++ [0513 22:20:23] Generating deepcopy code for 236 targets
+++ [0513 22:20:29] Building go targets for linux/amd64
    k8s.io/code-generator/cmd/defaulter-gen (non-static)
+++ [0513 22:20:30] Generating defaulter code for 92 targets
+++ [0513 22:20:38] Building go targets for linux/amd64
    k8s.io/code-generator/cmd/conversion-gen (non-static)
+++ [0513 22:20:39] Generating conversion code for 129 targets
+++ [0513 22:20:56] Building go targets for linux/amd64
    k8s.io/kube-openapi/cmd/openapi-gen (non-static)
+++ [0513 22:21:03] Generating openapi code for KUBE
+++ [0513 22:21:24] Generating openapi code for AGGREGATOR
+++ [0513 22:21:25] Generating openapi code for APIEXTENSIONS
+++ [0513 22:21:27] Generating openapi code for CODEGEN
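The tool-install step above runs with GO111MODULE=off globally but flips module mode on inside hack/tools, so gotestsum is resolved against that directory's own go.mod rather than the main Kubernetes module. A minimal sketch of the same step outside CI, assuming a Kubernetes checkout at an illustrative $KUBE_ROOT:

  # KUBE_ROOT is an illustrative variable, not from the log.
  cd "${KUBE_ROOT}/hack/tools"
  # Module mode on just for this install; the surrounding job runs GO111MODULE=off.
  GO111MODULE=on go install gotest.tools/gotestsum
  # go install drops the binary into GOPATH/bin; put it on PATH for the test run.
  export PATH="$(go env GOPATH)/bin:${PATH}"
  command -v gotestsum   # sanity check that the binary is reachable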
+++ [0513 22:21:28] Generating openapi code for SAMPLEAPISERVER
+ go install ./cmd/...
+ ./hack/install-etcd.sh
Downloading https://github.com/coreos/etcd/releases/download/v3.5.3/etcd-v3.5.3-linux-amd64.tar.gz succeed
etcd v3.5.3 installed. To use:
export PATH="/home/prow/go/src/k8s.io/kubernetes/third_party/etcd:${PATH}"
+ make test-cmd
+++ [0513 22:25:18] Building go targets for linux/amd64
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
Recording: record_command_canary
Running command: record_command_canary
+++ Running case: test-cmd.record_command_canary
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 162: bogus-expected-to-fail: command not found
!!! [0513 22:25:20] Call tree:
!!! [0513 22:25:20]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0513 22:25:20]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0513 22:25:20]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:138 juLog(...)
!!! [0513 22:25:20]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:166 record_command(...)
!!! [0513 22:25:20]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0513 22:25:20] Running kubeadm tests
+++ [0513 22:25:22] Building go targets for linux/amd64
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
+++ [0513 22:25:25] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kubeadm (static)
+++ [0513 22:26:08] Building go targets for linux/amd64
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
+++ [0513 22:26:11] Running tests without code coverage
{"Time":"2022-05-13T22:26:49.052549902Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t34.757s\n"}
✓ cmd/kubeadm/test/cmd (34.76s)
DONE 61 tests in 0.002s
processing junit xml file : /logs/artifacts/junit_20220513-222611.xml
done.
+++ [0513 22:26:49] Saved JUnit XML test report to /logs/artifacts/junit_20220513-222611.xml
+++ [0513 22:26:49] Running kubectl tests for kube-apiserver
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir /tmp/tmp.YOjRZNx7Z4 --listen-client-urls http://127.0.0.1:2379 --log-level=warn 2> "/logs/artifacts/etcd.919e94a4-d30a-11ec-a96f-722d8496ef8a.root.log.DEBUG.20220513-222649.30065" >/dev/null
Waiting for etcd to come up.
+++ [0513 22:26:50] On try 2, etcd: : {"health":"true","reason":""}
{"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"2","raft_term":"2"}}
+++ [0513 22:26:50] Building kubectl
+++ [0513 22:26:51] Building go targets for linux/amd64
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
+++ [0513 22:26:54] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kubectl (static)
    k8s.io/kubernetes/cmd/kubectl-convert (non-static)
+++ [0513 22:27:28] Running kubectl with no options
kubectl controls the Kubernetes cluster manager.
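The "Waiting for etcd to come up" / "On try 2, etcd" lines above are the harness polling the member's /health endpoint, which returns {"health":"true","reason":""} once etcd is serving. A sketch of that readiness loop using the same flags as the log (the data dir here is illustrative, and output is simply discarded rather than saved to an artifacts file):

  etcd --advertise-client-urls http://127.0.0.1:2379 \
    --data-dir "$(mktemp -d)" \
    --listen-client-urls http://127.0.0.1:2379 \
    --log-level=warn >/dev/null 2>&1 &
  # Poll /health until the member reports healthy.
  until curl -fs http://127.0.0.1:2379/health | grep -q '"health":"true"'; do
    sleep 1
  done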
 Find more information at: https://kubernetes.io/docs/reference/kubectl/

Basic Commands (Beginner):
  create          Create a resource from a file or from stdin
  expose          Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service
  run             Run a particular image on the cluster
  set             Set specific features on objects

Basic Commands (Intermediate):
  explain         Get documentation for a resource
  get             Display one or many resources
  edit            Edit a resource on the server
  delete          Delete resources by file names, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout         Manage the rollout of a resource
  scale           Set a new size for a deployment, replica set, or replication controller
  autoscale       Auto-scale a deployment, replica set, stateful set, or replication controller

Cluster Management Commands:
  certificate     Modify certificate resources.
  cluster-info    Display cluster information
  top             Display resource (CPU/memory) usage
  cordon          Mark node as unschedulable
  uncordon        Mark node as schedulable
  drain           Drain node in preparation for maintenance
  taint           Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe        Show details of a specific resource or group of resources
  logs            Print the logs for a container in a pod
  attach          Attach to a running container
  exec            Execute a command in a container
  port-forward    Forward one or more local ports to a pod
  proxy           Run a proxy to the Kubernetes API server
  cp              Copy files and directories to and from containers
  auth            Inspect authorization
  debug           Create debugging sessions for troubleshooting workloads and nodes

Advanced Commands:
  diff            Diff the live version against a would-be applied version
  apply           Apply a configuration to a resource by file name or stdin
  patch           Update fields of a resource
  replace         Replace a resource by file name or stdin
  wait            Experimental: Wait for a specific condition on one or many resources
  kustomize       Build a kustomization target from a directory or URL.

Settings Commands:
  label           Update the labels on a resource
  annotate        Update the annotations on a resource
  completion      Output shell completion code for the specified shell (bash, zsh or fish)

Other Commands:
  alpha           Commands for features in alpha
  api-resources   Print the supported API resources on the server
  api-versions    Print the supported API versions on the server, in the form of "group/version"
  config          Modify kubeconfig files
  plugin          Provides utilities for interacting with plugins
  version         Print the client and server version information

Usage:
  kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
User "test-admin" set.
Cluster "local" set.
Context "test" created.
Switched to context "test".
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://127.0.0.1:6443
  name: local
contexts:
- context:
    cluster: local
    user: test-admin
  name: test
current-context: test
kind: Config
preferences: {}
users:
- name: test-admin
  user:
    token: REDACTED
+++ [0513 22:27:28] Setup complete
+++ [0513 22:27:28] Building kube-apiserver
+++ [0513 22:27:29] Building go targets for linux/amd64
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
+++ [0513 22:27:33] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kube-apiserver (static)
+++ [0513 22:28:59] Starting kube-apiserver
W0513 22:28:59.835947 53075 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
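The four "set/created/Switched" messages above are kubectl config at work, and the YAML that follows is kubectl config view. A sketch of the equivalent commands that produce that kubeconfig (the token value is illustrative; the real one is redacted in the log):

  kubectl config set-credentials test-admin --token="example-token"   # token is illustrative
  kubectl config set-cluster local --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true
  kubectl config set-context test --cluster=local --user=test-admin
  kubectl config use-context test
  kubectl config view   # prints the Config shown above; secrets display as REDACTED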
I0513 22:28:59.992909 53075 serving.go:342] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0513 22:28:59.992930 53075 server.go:558] external host was not specified, using 10.34.203.8
W0513 22:28:59.992945 53075 authentication.go:526] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
I0513 22:28:59.993502 53075 server.go:158] Version: v1.25.0-alpha.0.494+344185089155f1
I0513 22:28:59.993541 53075 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
W0513 22:29:00.432401 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.432423 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.432432 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.432675 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.432699 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.432735 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.432744 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.432762 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.433085 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.433170 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.433703 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.433734 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.433755 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.433838 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.433915 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:00.433934 53075 plugins.go:158] Loaded 6 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,RuntimeClass,DefaultIngressClass.
I0513 22:29:00.433943 53075 plugins.go:161] Loaded 9 validating admission controller(s) successfully in the following order: LimitRanger,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ResourceQuota.
W0513 22:29:00.434060 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.434080 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:00.435033 53075 plugins.go:158] Loaded 6 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,RuntimeClass,DefaultIngressClass.
I0513 22:29:00.435050 53075 plugins.go:161] Loaded 9 validating admission controller(s) successfully in the following order: LimitRanger,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ResourceQuota.
W0513 22:29:00.463226 53075 genericapiserver.go:590] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
W0513 22:29:00.463351 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:00.464189 53075 instance.go:273] Using reconciler: lease
I0513 22:29:00.554925 53075 instance.go:586] API group "internal.apiserver.k8s.io" is not enabled, skipping.
W0513 22:29:00.692601 53075 genericapiserver.go:590] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0513 22:29:00.694411 53075 genericapiserver.go:590] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
W0513 22:29:00.698530 53075 genericapiserver.go:590] Skipping API autoscaling/v2beta1 because it has no resources.
W0513 22:29:00.703647 53075 genericapiserver.go:590] Skipping API batch/v1beta1 because it has no resources.
W0513 22:29:00.705735 53075 genericapiserver.go:590] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W0513 22:29:00.707685 53075 genericapiserver.go:590] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0513 22:29:00.707734 53075 genericapiserver.go:590] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
W0513 22:29:00.712703 53075 genericapiserver.go:590] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0513 22:29:00.714586 53075 genericapiserver.go:590] Skipping API node.k8s.io/v1beta1 because it has no resources.
W0513 22:29:00.714610 53075 genericapiserver.go:590] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0513 22:29:00.714666 53075 genericapiserver.go:590] Skipping API policy/v1beta1 because it has no resources.
W0513 22:29:00.719718 53075 genericapiserver.go:590] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0513 22:29:00.719740 53075 genericapiserver.go:590] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0513 22:29:00.721627 53075 genericapiserver.go:590] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0513 22:29:00.721661 53075 genericapiserver.go:590] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0513 22:29:00.727032 53075 genericapiserver.go:590] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0513 22:29:00.732278 53075 genericapiserver.go:590] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0513 22:29:00.737481 53075 genericapiserver.go:590] Skipping API apps/v1beta2 because it has no resources.
W0513 22:29:00.737504 53075 genericapiserver.go:590] Skipping API apps/v1beta1 because it has no resources.
W0513 22:29:00.739830 53075 genericapiserver.go:590] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
W0513 22:29:00.741908 53075 genericapiserver.go:590] Skipping API events.k8s.io/v1beta1 because it has no resources.
I0513 22:29:00.742652 53075 plugins.go:158] Loaded 6 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,RuntimeClass,DefaultIngressClass.
I0513 22:29:00.742668 53075 plugins.go:161] Loaded 9 validating admission controller(s) successfully in the following order: LimitRanger,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ResourceQuota.
W0513 22:29:00.744088 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:00.761168 53075 genericapiserver.go:590] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
W0513 22:29:00.761523 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:01.897195 53075 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::hack/testdata/ca/ca.crt"
I0513 22:29:01.897456 53075 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key"
I0513 22:29:01.897552 53075 secure_serving.go:210] Serving securely on 127.0.0.1:6443
I0513 22:29:01.897654 53075 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0513 22:29:01.897674 53075 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0513 22:29:01.897772 53075 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0513 22:29:01.897991 53075 controller.go:85] Starting OpenAPI controller
I0513 22:29:01.898040 53075 available_controller.go:491] Starting AvailableConditionController
I0513 22:29:01.898049 53075 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0513 22:29:01.898107 53075 autoregister_controller.go:141] Starting autoregister controller
I0513 22:29:01.898121 53075 cache.go:32] Waiting for caches to sync for autoregister controller
I0513 22:29:01.897685 53075 customresource_discovery_controller.go:209] Starting DiscoveryController
I0513 22:29:01.898991 53075 controller.go:85] Starting OpenAPI V3 controller
I0513 22:29:01.899038 53075 naming_controller.go:291] Starting NamingConditionController
I0513 22:29:01.899056 53075 establishing_controller.go:76] Starting EstablishingController
I0513 22:29:01.899103 53075 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0513 22:29:01.899177 53075 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0513 22:29:01.899265 53075 crd_finalizer.go:266] Starting CRDFinalizer
I0513 22:29:01.898168 53075 controller.go:83] Starting OpenAPI AggregationController
I0513 22:29:01.898187 53075 apf_controller.go:317] Starting API Priority and Fairness config controller
I0513 22:29:01.898329 53075 controller.go:80] Starting OpenAPI V3 AggregationController
W0513 22:29:01.898458 53075 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
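The "Skipping API ... because it has no resources" warnings above mean those group/versions are simply not enabled on this test apiserver (beta APIs are generally off by default in recent Kubernetes releases, so this is expected here rather than an error). Once the server is serving on 127.0.0.1:6443, the enabled set can be checked; a sketch:

  # List the group/versions the server actually serves; the skipped ones above
  # (e.g. batch/v1beta1, policy/v1beta1) should be absent from this output.
  kubectl api-versions
  # On a local apiserver, individual group/versions can be re-enabled with
  # --runtime-config, e.g. --runtime-config=batch/v1beta1=true (assumption:
  # this harness leaves them disabled).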
I0513 22:29:01.899720 53075 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0513 22:29:01.899741 53075 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
I0513 22:29:01.898779 53075 crdregistration_controller.go:111] Starting crd-autoregister controller
I0513 22:29:01.899759 53075 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I0513 22:29:01.899796 53075 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::hack/testdata/ca/ca.crt"
I0513 22:29:01.964259 53075 controller.go:611] quota admission added evaluator for: namespaces
I0513 22:29:01.998224 53075 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0513 22:29:01.998263 53075 cache.go:39] Caches are synced for AvailableConditionController controller
I0513 22:29:01.998262 53075 cache.go:39] Caches are synced for autoregister controller
I0513 22:29:01.999786 53075 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0513 22:29:01.999816 53075 shared_informer.go:262] Caches are synced for crd-autoregister
I0513 22:29:01.999865 53075 apf_controller.go:322] Running API Priority and Fairness config worker
I0513 22:29:02.662065 53075 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0513 22:29:02.904872 53075 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0513 22:29:02.915916 53075 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0513 22:29:02.915947 53075 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0513 22:29:03.960103 53075 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0513 22:29:04.036134 53075 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0513 22:29:04.182225 53075 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.0.0.1]
W0513 22:29:04.201795 53075 lease.go:250] Resetting endpoints for master service "kubernetes" to [10.34.203.8]
I0513 22:29:04.202705 53075 controller.go:611] quota admission added evaluator for: endpoints
I0513 22:29:04.210860 53075 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
+++ [0513 22:29:04] On try 5, apiserver: ok
+++ [0513 22:29:04] Building kube-controller-manager
+++ [0513 22:29:06] Building go targets for linux/amd64
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
+++ [0513 22:29:10] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kube-controller-manager (static)
+++ [0513 22:29:37] Generate kubeconfig for controller-manager
+++ [0513 22:29:37] Starting controller-manager
I0513 22:29:38.466373 56663 serving.go:348] Generated self-signed cert in-memory
W0513 22:29:39.460472 56663 authentication.go:423] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0513 22:29:39.460520 56663 authentication.go:317] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0513 22:29:39.460529 56663 authentication.go:341] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0513 22:29:39.460544 56663 authorization.go:225] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0513 22:29:39.460558 56663 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0513 22:29:39.460580 56663 controllermanager.go:180] Version: v1.25.0-alpha.0.494+344185089155f1
I0513 22:29:39.460592 56663 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0513 22:29:39.461742 56663 secure_serving.go:210] Serving securely on [::]:10257
I0513 22:29:39.461943 56663 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0513 22:29:39.462033 56663 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
+++ [0513 22:29:39] On try 2, controller-manager: ok
I0513 22:29:39.477451 53075 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0513 22:29:39.480549 56663 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0513 22:29:39.480771 56663 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="919e94a4-d30a-11ec-a96f-722d8496ef8a_53381a52-014b-4b04-8852-ebf5fbee262f became leader"
W0513 22:29:39.503067 56663 controllermanager.go:615] "serviceaccount-token" is disabled because there is no private key
W0513 22:29:39.503412 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.508466 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.508549 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.508644 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.508692 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
W0513 22:29:39.508728 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.508750 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.508777 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for events.events.k8s.io
W0513 22:29:39.508796 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.508811 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
W0513 22:29:39.508824 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.508838 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for cronjobs.batch
W0513 22:29:39.508858 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
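The leader-election lines above are backed by a coordination.k8s.io Lease object in kube-system; the "became leader" event records the instance identity written into the Lease. A sketch of how to observe the holder directly against this test cluster:

  # The instance that acquired the lock appears in spec.holderIdentity.
  kubectl -n kube-system get lease kube-controller-manager -o yaml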
I0513 22:29:39.508875 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpoints
W0513 22:29:39.508885 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.508938 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.508959 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for replicasets.apps
W0513 22:29:39.508972 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.508990 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for serviceaccounts
W0513 22:29:39.509033 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.509058 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for controllerrevisions.apps
W0513 22:29:39.509082 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.509106 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W0513 22:29:39.509147 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.509171 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
W0513 22:29:39.509199 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.509221 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for podtemplates
W0513 22:29:39.509233 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.509246 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for limitranges
W0513 22:29:39.509282 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.509305 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
W0513 22:29:39.509317 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.509330 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
W0513 22:29:39.509340 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.509358 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.509378 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.509426 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for statefulsets.apps
W0513 22:29:39.509448 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.509464 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
W0513 22:29:39.509486 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.509508 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for deployments.apps
W0513 22:29:39.509517 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.509533 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for daemonsets.apps
W0513 22:29:39.509553 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.509568 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for jobs.batch
W0513 22:29:39.509579 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.509600 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
W0513 22:29:39.509618 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.509637 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I0513 22:29:39.509673 56663 controllermanager.go:593] Started "resourcequota"
I0513 22:29:39.510093 56663 controllermanager.go:593] Started "horizontalpodautoscaling"
W0513 22:29:39.510287 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.510311 56663 controllermanager.go:593] Started "root-ca-cert-publisher"
I0513 22:29:39.510513 56663 controllermanager.go:593] Started "ephemeral-volume"
I0513 22:29:39.510959 56663 resource_quota_controller.go:273] Starting resource quota controller
I0513 22:29:39.510983 56663 shared_informer.go:255] Waiting for caches to sync for resource quota
I0513 22:29:39.511008 56663 controller.go:170] Starting ephemeral volume controller
I0513 22:29:39.511016 56663 shared_informer.go:255] Waiting for caches to sync for ephemeral
I0513 22:29:39.511017 56663 horizontal.go:168] Starting HPA controller
I0513 22:29:39.511151 56663 shared_informer.go:255] Waiting for caches to sync for HPA
I0513 22:29:39.510964 56663 controllermanager.go:593] Started "garbagecollector"
I0513 22:29:39.511511 56663 controllermanager.go:593] Started "replicaset"
I0513 22:29:39.511759 56663 controllermanager.go:593] Started "disruption"
I0513 22:29:39.511999 56663 node_lifecycle_controller.go:377] Sending events to api server.
I0513 22:29:39.512047 56663 disruption.go:363] Starting disruption controller
I0513 22:29:39.512063 56663 shared_informer.go:255] Waiting for caches to sync for disruption
I0513 22:29:39.512008 56663 replica_set.go:205] Starting replicaset controller
I0513 22:29:39.512081 56663 shared_informer.go:255] Waiting for caches to sync for ReplicaSet
I0513 22:29:39.511055 56663 resource_quota_monitor.go:308] QuotaMonitor running
I0513 22:29:39.511066 56663 garbagecollector.go:149] Starting garbage collector controller
I0513 22:29:39.512099 56663 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0513 22:29:39.511041 56663 publisher.go:107] Starting root CA certificate configmap publisher
I0513 22:29:39.512120 56663 shared_informer.go:255] Waiting for caches to sync for crt configmap
I0513 22:29:39.512144 56663 graph_builder.go:289] GraphBuilder running
W0513 22:29:39.512158 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.512200 56663 taint_manager.go:163] "Sending events to api server"
I0513 22:29:39.512270 56663 node_lifecycle_controller.go:505] Controller will reconcile labels.
I0513 22:29:39.512305 56663 controllermanager.go:593] Started "nodelifecycle"
I0513 22:29:39.512433 56663 node_lifecycle_controller.go:539] Starting node controller
I0513 22:29:39.512451 56663 shared_informer.go:255] Waiting for caches to sync for taint
W0513 22:29:39.512452 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.512511 56663 controllermanager.go:593] Started "clusterrole-aggregation"
W0513 22:29:39.513029 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.513767 56663 controllermanager.go:593] Started "csrsigning"
W0513 22:29:39.514095 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.514329 56663 controllermanager.go:593] Started "persistentvolume-expander"
W0513 22:29:39.514354 56663 controllermanager.go:558] "tokencleaner" is disabled
I0513 22:29:39.514524 56663 node_lifecycle_controller.go:77] Sending events to api server
E0513 22:29:39.514556 56663 core.go:211] failed to start cloud node lifecycle controller: no cloud provider provided
W0513 22:29:39.514570 56663 controllermanager.go:571] Skipping "cloud-node-lifecycle"
I0513 22:29:39.515293 56663 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
I0513 22:29:39.515317 56663 shared_informer.go:255] Waiting for caches to sync for ClusterRoleAggregator
I0513 22:29:39.515482 56663 certificate_controller.go:119] Starting certificate controller "csrsigning-kubelet-serving"
I0513 22:29:39.515491 56663 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
I0513 22:29:39.515497 56663 controllermanager.go:593] Started "endpoint"
I0513 22:29:39.515762 56663 certificate_controller.go:119] Starting certificate controller "csrsigning-kubelet-client"
I0513 22:29:39.515773 56663 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-client
I0513 22:29:39.515950 56663 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::hack/testdata/ca/ca.crt::hack/testdata/ca/ca.key"
I0513 22:29:39.515994 56663 controllermanager.go:593] Started "podgc"
I0513 22:29:39.516971 56663 controllermanager.go:593] Started "deployment"
I0513 22:29:39.516994 56663 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::hack/testdata/ca/ca.crt::hack/testdata/ca/ca.key"
I0513 22:29:39.517026 56663 deployment_controller.go:153] "Starting controller" controller="deployment"
I0513 22:29:39.517036 56663 shared_informer.go:255] Waiting for caches to sync for deployment
I0513 22:29:39.517213 56663 certificate_controller.go:119] Starting certificate controller "csrsigning-kube-apiserver-client"
I0513 22:29:39.517233 56663 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I0513 22:29:39.517612 56663 certificate_controller.go:119] Starting certificate controller "csrsigning-legacy-unknown"
I0513 22:29:39.517638 56663 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
I0513 22:29:39.517855 56663 expand_controller.go:341] Starting expand controller
I0513 22:29:39.517871 56663 shared_informer.go:255] Waiting for caches to sync for expand
I0513 22:29:39.517961 56663 endpoints_controller.go:178] Starting endpoint controller
I0513 22:29:39.517968 56663 shared_informer.go:255] Waiting for caches to sync for endpoint
I0513 22:29:39.518171 56663 gc_controller.go:92] Starting GC controller
I0513 22:29:39.518199 56663 shared_informer.go:255] Waiting for caches to sync for GC
I0513 22:29:39.518323 56663 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::hack/testdata/ca/ca.crt::hack/testdata/ca/ca.key"
I0513 22:29:39.519525 56663 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::hack/testdata/ca/ca.crt::hack/testdata/ca/ca.key"
W0513 22:29:39.516999 56663 controllermanager.go:558] "bootstrapsigner" is disabled
W0513 22:29:39.519559 56663 controllermanager.go:571] Skipping "nodeipam"
I0513 22:29:39.521209 56663 controllermanager.go:593] Started "endpointslice"
I0513 22:29:39.521333 56663 endpointslice_controller.go:257] Starting endpoint slice controller
I0513 22:29:39.521355 56663 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
I0513 22:29:39.522121 56663 controllermanager.go:593] Started "csrapproving"
I0513 22:29:39.522287 56663 controllermanager.go:593] Started "csrcleaner"
I0513 22:29:39.522661 56663 certificate_controller.go:119] Starting certificate controller "csrapproving"
I0513 22:29:39.522689 56663 shared_informer.go:255] Waiting for caches to sync for certificate-csrapproving
I0513 22:29:39.522708 56663 cleaner.go:82] Starting CSR cleaner controller
I0513 22:29:39.527208 56663 controllermanager.go:593] Started "namespace"
I0513 22:29:39.527300 56663 namespace_controller.go:200] Starting namespace controller
I0513 22:29:39.527317 56663 shared_informer.go:255] Waiting for caches to sync for namespace
I0513 22:29:39.527503 56663 controllermanager.go:593] Started "daemonset"
I0513 22:29:39.527640 56663 daemon_controller.go:284] Starting daemon sets controller
I0513 22:29:39.527650 56663 shared_informer.go:255] Waiting for caches to sync for daemon sets
I0513 22:29:39.527812 56663 controllermanager.go:593] Started "job"
I0513 22:29:39.527839 56663 job_controller.go:184] Starting job controller
I0513 22:29:39.527850 56663 shared_informer.go:255] Waiting for caches to sync for job
I0513 22:29:39.528011 56663 controllermanager.go:593] Started "ttl"
I0513 22:29:39.528028 56663 core.go:221] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0513 22:29:39.528035 56663 controllermanager.go:571] Skipping "route"
I0513 22:29:39.528142 56663 ttl_controller.go:120] Starting TTL controller
I0513 22:29:39.528158 56663 shared_informer.go:255] Waiting for caches to sync for TTL
W0513 22:29:39.528248 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.528267 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.528278 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.528379 56663 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0513 22:29:39.528879 56663 controllermanager.go:593] Started "attachdetach"
I0513 22:29:39.529002 56663 attach_detach_controller.go:328] Starting attach detach controller
I0513 22:29:39.529019 56663 shared_informer.go:255] Waiting for caches to sync for attach detach
I0513 22:29:39.529186 56663 controllermanager.go:593] Started "ttl-after-finished"
I0513 22:29:39.529206 56663 ttlafterfinished_controller.go:109] Starting TTL after finished controller
I0513 22:29:39.529218 56663 shared_informer.go:255] Waiting for caches to sync for TTL after finished
I0513 22:29:39.529502 56663 controllermanager.go:593] Started "endpointslicemirroring"
I0513 22:29:39.529585 56663 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
I0513 22:29:39.529603 56663 shared_informer.go:255] Waiting for caches to sync for endpoint_slice_mirroring
I0513 22:29:39.529808 56663 controllermanager.go:593] Started "replicationcontroller"
I0513 22:29:39.529953 56663 replica_set.go:205] Starting replicationcontroller controller
I0513 22:29:39.529971 56663 shared_informer.go:255] Waiting for caches to sync for ReplicationController
I0513 22:29:39.530014 56663 controllermanager.go:593] Started "serviceaccount"
I0513 22:29:39.530229 56663 serviceaccounts_controller.go:117] Starting service account controller
I0513 22:29:39.530249 56663 shared_informer.go:255] Waiting for caches to sync for service account
I0513 22:29:39.530292 56663 controllermanager.go:593] Started "statefulset"
I0513 22:29:39.530425 56663 stateful_set.go:147] Starting stateful set controller
I0513 22:29:39.530441 56663 shared_informer.go:255] Waiting for caches to sync for stateful set
I0513 22:29:39.530556 56663 controllermanager.go:593] Started "cronjob"
I0513 22:29:39.530686 56663 cronjob_controllerv2.go:135] "Starting cronjob controller v2"
I0513 22:29:39.530773 56663 shared_informer.go:255] Waiting for caches to sync for cronjob
E0513 22:29:39.530833 56663 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0513 22:29:39.530856 56663 controllermanager.go:571] Skipping "service"
W0513 22:29:39.531340 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.531381 56663 controllermanager.go:593] Started "persistentvolume-binder"
I0513 22:29:39.531599 56663 controllermanager.go:593] Started "pvc-protection"
I0513 22:29:39.531675 56663 pv_controller_base.go:311] Starting persistent volume controller
I0513 22:29:39.531690 56663 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0513 22:29:39.531785 56663 pvc_protection_controller.go:103] "Starting PVC protection controller"
I0513 22:29:39.531825 56663 shared_informer.go:255] Waiting for caches to sync for PVC protection
I0513 22:29:39.531801 56663 controllermanager.go:593] Started "pv-protection"
I0513 22:29:39.531815 56663 pv_protection_controller.go:79] Starting PV protection controller
I0513 22:29:39.531944 56663 shared_informer.go:255] Waiting for caches to sync for PV protection
I0513 22:29:39.533667 56663 shared_informer.go:255] Waiting for caches to sync for resource quota
W0513 22:29:39.552399 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.553458 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.553510 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.553638 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.554024 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.554223 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.554263 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.554552 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.554771 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0513 22:29:39.555269 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:29:39.555920 56663 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0513 22:29:39.612201 56663 shared_informer.go:262] Caches are synced for ephemeral
I0513 22:29:39.612243 56663 shared_informer.go:262] Caches are synced for crt configmap
I0513 22:29:39.612253 56663 shared_informer.go:262] Caches are synced for HPA
I0513 22:29:39.615811 56663 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I0513 22:29:39.615832 56663 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0513 22:29:39.615854 56663 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
I0513 22:29:39.618221 56663 shared_informer.go:262] Caches are synced for expand
I0513 22:29:39.618225 56663 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
I0513 22:29:39.618250 56663 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0513 22:29:39.622872 56663 shared_informer.go:262] Caches are synced for certificate-csrapproving
I0513 22:29:39.628369 56663 shared_informer.go:262] Caches are synced for job
I0513 22:29:39.628394 56663 shared_informer.go:262] Caches are synced for namespace
I0513 22:29:39.629650 56663 shared_informer.go:262] Caches are synced for TTL after finished
I0513 22:29:39.630869 56663 shared_informer.go:262] Caches are synced for ReplicationController
I0513 22:29:39.631005 56663 shared_informer.go:262] Caches are synced for stateful set
I0513 22:29:39.631020 56663 shared_informer.go:262] Caches are synced for service account
I0513 22:29:39.631873 56663 shared_informer.go:262] Caches are synced for PVC protection
I0513 22:29:39.632022 56663 shared_informer.go:262] Caches are synced for PV protection
I0513 22:29:39.632768 53075 controller.go:611] quota admission added evaluator for: serviceaccounts
I0513 22:29:39.712814 56663 shared_informer.go:262] Caches are synced for disruption
I0513 22:29:39.712844 56663 disruption.go:371] Sending events to api server.
I0513 22:29:39.712821 56663 shared_informer.go:262] Caches are synced for ReplicaSet
I0513 22:29:39.718005 56663 shared_informer.go:262] Caches are synced for deployment
I0513 22:29:39.730859 56663 shared_informer.go:262] Caches are synced for cronjob
I0513 22:29:39.812876 56663 shared_informer.go:262] Caches are synced for taint
I0513 22:29:39.812982 56663 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0513 22:29:39.818124 56663 shared_informer.go:262] Caches are synced for endpoint
I0513 22:29:39.818252 56663 shared_informer.go:262] Caches are synced for GC
I0513 22:29:39.822394 56663 shared_informer.go:262] Caches are synced for endpoint_slice
I0513 22:29:39.828640 56663 shared_informer.go:262] Caches are synced for TTL
I0513 22:29:39.828802 56663 shared_informer.go:262] Caches are synced for daemon sets
I0513 22:29:39.830000 56663 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0513 22:29:39.832339 56663 shared_informer.go:262] Caches are synced for persistent volume
I0513 22:29:39.911158 56663 shared_informer.go:262] Caches are synced for resource quota
I0513 22:29:39.929730 56663 shared_informer.go:262] Caches are synced for attach detach
I0513 22:29:39.933938 56663 shared_informer.go:262] Caches are synced for resource quota
I0513 22:29:40.312671 56663 shared_informer.go:262] Caches are synced for garbage collector
I0513 22:29:40.312706 56663 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0513 22:29:40.356237 56663 shared_informer.go:262] Caches are synced for garbage collector
node/127.0.0.1 created
W0513 22:29:40.452175 56663 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
+++ [0513 22:29:40] Checking kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.0.494+344185089155f1", GitCommit:"344185089155f1413d7121814ac8a1a6b218e0de", GitTreeState:"clean", BuildDate:"2022-05-13T21:24:06Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.0.494+344185089155f1", GitCommit:"344185089155f1413d7121814ac8a1a6b218e0de", GitTreeState:"clean", BuildDate:"2022-05-13T21:24:06Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocate IP 10.0.0.1: provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   36s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests
+++ Running case: test-cmd.run_kubectl_version_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_version_tests
+++ [0513 22:29:40] Testing kubectl version
{
  "major": "1",
  "minor": "25+",
  "gitVersion": "v1.25.0-alpha.0.494+344185089155f1",
  "gitCommit": "344185089155f1413d7121814ac8a1a6b218e0de",
  "gitTreeState": "clean",
  "buildDate": "2022-05-13T21:24:06Z",
  "goVersion": "go1.18.1",
  "compiler": "gc",
  "platform": "linux/amd64"
}
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.
Use --output=yaml|json to get the full version.
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
+++ [0513 22:29:41] Testing kubectl version: check client only output matches expected output
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Successful: the flag '--client' shows correct client info
Successful: the flag '--client' correctly has no server version info
+++ [0513 22:29:41] Testing kubectl version: verify json output
Successful: --output json has correct client info
Successful: --output json has correct server info
+++ [0513 22:29:41] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
Successful: --client --output json has correct client info
Successful: --client --output json has no server info
+++ [0513 22:29:41] Testing kubectl version: compare json output using additional --short flag
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Successful: --short --output client json info is equal to non short result
Successful: --short --output server json info is equal to non short result
+++ [0513 22:29:41] Testing kubectl version: compare json output with yaml output
Successful: --output json/yaml has identical information
+++ [0513 22:29:41] Testing kubectl version: contains semantic version of embedded kustomize
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Successful
message:Client Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.0.494+344185089155f1", GitCommit:"344185089155f1413d7121814ac8a1a6b218e0de", GitTreeState:"clean", BuildDate:"2022-05-13T21:24:06Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.0.494+344185089155f1", GitCommit:"344185089155f1413d7121814ac8a1a6b218e0de", GitTreeState:"clean", BuildDate:"2022-05-13T21:24:06Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
has not:Kustomize Version\: unknown
Successful
message:Client Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.0.494+344185089155f1", GitCommit:"344185089155f1413d7121814ac8a1a6b218e0de", GitTreeState:"clean", BuildDate:"2022-05-13T21:24:06Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.0.494+344185089155f1", GitCommit:"344185089155f1413d7121814ac8a1a6b218e0de", GitTreeState:"clean", BuildDate:"2022-05-13T21:24:06Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
has:Kustomize Version\: v[[:digit:]][[:digit:]]*\.[[:digit:]][[:digit:]]*\.[[:digit:]][[:digit:]]*
+++ [0513 22:29:41] Testing kubectl version: all output formats include kustomize version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Successful
message:Client Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.0.494+344185089155f1", GitCommit:"344185089155f1413d7121814ac8a1a6b218e0de", GitTreeState:"clean", BuildDate:"2022-05-13T21:24:06Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
has:Kustomize Version
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Successful
message:Client Version: v1.25.0-alpha.0.494+344185089155f1
Kustomize Version: v4.5.4
Server Version: v1.25.0-alpha.0.494+344185089155f1
has:Kustomize Version
Successful
message:clientVersion:
  buildDate: "2022-05-13T21:24:06Z"
  compiler: gc
  gitCommit: 344185089155f1413d7121814ac8a1a6b218e0de
  gitTreeState: clean
  gitVersion: v1.25.0-alpha.0.494+344185089155f1
  goVersion: go1.18.1
  major: "1"
  minor: 25+
  platform: linux/amd64
kustomizeVersion: v4.5.4
serverVersion:
  buildDate: "2022-05-13T21:24:06Z"
  compiler: gc
  gitCommit: 344185089155f1413d7121814ac8a1a6b218e0de
  gitTreeState: clean
  gitVersion: v1.25.0-alpha.0.494+344185089155f1
  goVersion: go1.18.1
  major: "1"
  minor: 25+
  platform: linux/amd64
has:kustomizeVersion
Successful
message:{
  "clientVersion": {
    "major": "1",
    "minor": "25+",
    "gitVersion": "v1.25.0-alpha.0.494+344185089155f1",
    "gitCommit": "344185089155f1413d7121814ac8a1a6b218e0de",
    "gitTreeState": "clean",
    "buildDate": "2022-05-13T21:24:06Z",
    "goVersion": "go1.18.1",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "kustomizeVersion": "v4.5.4",
  "serverVersion": {
    "major": "1",
    "minor": "25+",
    "gitVersion": "v1.25.0-alpha.0.494+344185089155f1",
    "gitCommit": "344185089155f1413d7121814ac8a1a6b218e0de",
    "gitTreeState": "clean",
    "buildDate": "2022-05-13T21:24:06Z",
    "goVersion": "go1.18.1",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
has:kustomizeVersion
+++ exit code: 0
Recording: run_kubectl_results_tests
Running command: run_kubectl_results_tests
+++ Running case: test-cmd.run_kubectl_results_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_results_tests
+++ [0513 22:29:42] Testing kubectl result output
Successful: stdout for kubectl list
Successful: stderr for kubectl list
results.sh:45: Successful: kubectl list
Successful: stdout for kubectl get pod/no-such-pod
Successful: stderr for kubectl get pod/no-such-pod
results.sh:54: Successful: kubectl get pod/no-such-pod
+++ exit code: 0
Recording: run_kubectl_config_set_tests
Running command: run_kubectl_config_set_tests
+++ Running case: test-cmd.run_kubectl_config_set_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_config_set_tests
+++ [0513 22:29:42] Creating namespace namespace-1652480982-23886
namespace/namespace-1652480982-23886 created
Context "test" modified.
+++ [0513 22:29:42] Testing kubectl(v1:config set)
Cluster "test-cluster" set.
Property "clusters.test-cluster.certificate-authority-data" set.
Property "clusters.test-cluster.certificate-authority-data" set.
+++ exit code: 0
Recording: run_kubectl_config_set_cluster_tests
Running command: run_kubectl_config_set_cluster_tests
+++ Running case: test-cmd.run_kubectl_config_set_cluster_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_config_set_cluster_tests
+++ [0513 22:29:42] Creating namespace namespace-1652480982-11192
namespace/namespace-1652480982-11192 created
Context "test" modified.
+++ [0513 22:29:42] Testing kubectl config set-cluster
Cluster "test-cluster-1" set.
Cluster "test-cluster-2" set.
Cluster "test-cluster-3" set.
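The version assertions above compare the same data across kubectl's output formats. A sketch of equivalent manual checks (jq availability is an assumption, not part of the harness):

  kubectl version --client --output=json | jq -r '.kustomizeVersion'   # e.g. "v4.5.4"
  kubectl version --output=yaml   # clientVersion, kustomizeVersion, serverVersion blocks
  # With --client the serverVersion block is omitted, which is what the
  # "has no server info" check verifies:
  kubectl version --client --output=json | jq 'has("serverVersion")'   # expect: false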
+++ exit code: 0
Recording: run_kubectl_config_set_credentials_tests
Running command: run_kubectl_config_set_credentials_tests
+++ Running case: test-cmd.run_kubectl_config_set_credentials_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_config_set_credentials_tests
+++ [0513 22:29:43] Creating namespace namespace-1652480983-20082
namespace/namespace-1652480983-20082 created
Context "test" modified.
+++ [0513 22:29:43] Testing kubectl config set-credentials
User "user1" set.
User "user2" set.
User "user3" set.
+++ exit code: 0
Recording: run_kubectl_local_proxy_tests
Running command: run_kubectl_local_proxy_tests
+++ Running case: test-cmd.run_kubectl_local_proxy_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_local_proxy_tests
+++ [0513 22:29:43] Testing kubectl local proxy
+++ [0513 22:29:43] Starting kubectl proxy on random port; output file in proxy-port.out.WU5NB; args:
+++ [0513 22:29:44] Attempt 0 to read proxy-port.out.WU5NB...
+++ [0513 22:29:44] kubectl proxy running on port 44247
+++ [0513 22:29:44] On try 1, kubectl proxy: ok
+++ [0513 22:29:44] Stopping proxy on port 44247
/home/prow/go/src/k8s.io/kubernetes/hack/lib/logging.sh: line 166: 57686 Killed  kubectl proxy --port=0 --www=. > "${PROXY_PORT_FILE}" 2>&1
+++ [0513 22:29:44] Starting kubectl proxy on random port; output file in proxy-port.out.CNiTI; args:
+++ [0513 22:29:44] Attempt 0 to read proxy-port.out.CNiTI...
I0513 22:29:44.813868 56663 node_lifecycle_controller.go:1399] Initializing eviction metric for zone:
I0513 22:29:44.814015 56663 node_lifecycle_controller.go:1165] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0513 22:29:44.814051 56663 event.go:294] "Event occurred" object="127.0.0.1" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller"
+++ [0513 22:29:44] kubectl proxy running on port 35399
+++ [0513 22:29:44] On try 1, kubectl proxy: ok
+++ [0513 22:29:44] Stopping proxy on port 35399
/home/prow/go/src/k8s.io/kubernetes/hack/lib/logging.sh: line 166: 57723 Killed  kubectl proxy --port=0 --www=. > "${PROXY_PORT_FILE}" 2>&1
+++ [0513 22:29:44] Starting kubectl proxy on random port; output file in proxy-port.out.Q9hxU; args: /custom
+++ [0513 22:29:45] Attempt 0 to read proxy-port.out.Q9hxU...
+++ [0513 22:29:45] kubectl proxy running on port 42917
+++ [0513 22:29:45] On try 1, kubectl proxy --api-prefix=/custom: Moved Permanently.
+++ [0513 22:29:45] Stopping proxy on port 42917
+++ exit code: 0
Recording: run_RESTMapper_evaluation_tests
Running command: run_RESTMapper_evaluation_tests
+++ Running case: test-cmd.run_RESTMapper_evaluation_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0513 22:29:45] Creating namespace namespace-1652480985-26870
namespace/namespace-1652480985-26870 created
Context "test" modified.
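The proxy case above starts kubectl proxy on a random port, reads the bound port back from its output file, probes it, and kills it. A minimal sketch of that pattern (the output file name and the /healthz probe are illustrative; the suite's helper does the equivalent):

  kubectl proxy --port=0 --www=. > proxy-port.out 2>&1 &
  PROXY_PID=$!
  PORT=$(sed -n 's/.*Starting to serve on 127.0.0.1:\([0-9]*\).*/\1/p' proxy-port.out)
  curl -s "http://127.0.0.1:${PORT}/healthz"    # expect: ok
  kill "${PROXY_PID}"
  # the third run passes --api-prefix=/custom, so the API is served under /custom/... instead of /api/...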
+++ [0513 22:29:45] Testing RESTMapper
+++ [0513 22:29:46] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
events                            ev           v1                                     true         Event
limitranges                       limits       v1                                     true         LimitRange
namespaces                        ns           v1                                     false        Namespace
nodes                             no           v1                                     false        Node
persistentvolumeclaims            pvc          v1                                     true         PersistentVolumeClaim
persistentvolumes                 pv           v1                                     false        PersistentVolume
pods                              po           v1                                     true         Pod
podtemplates                                   v1                                     true         PodTemplate
replicationcontrollers            rc           v1                                     true         ReplicationController
resourcequotas                    quota        v1                                     true         ResourceQuota
secrets                                        v1                                     true         Secret
serviceaccounts                   sa           v1                                     true         ServiceAccount
services                          svc          v1                                     true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io/v1        false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io/v1        false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io/v1                false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io/v1              false        APIService
controllerrevisions                            apps/v1                                true         ControllerRevision
daemonsets                        ds           apps/v1                                true         DaemonSet
deployments                       deploy       apps/v1                                true         Deployment
replicasets                       rs           apps/v1                                true         ReplicaSet
statefulsets                      sts          apps/v1                                true         StatefulSet
tokenreviews                                   authentication.k8s.io/v1               false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io/v1                true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io/v1                false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io/v1                false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io/v1                false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling/v2                         true         HorizontalPodAutoscaler
cronjobs                          cj           batch/v1                               true         CronJob
jobs                                           batch/v1                               true         Job
certificatesigningrequests        csr          certificates.k8s.io/v1                 false        CertificateSigningRequest
leases                                         coordination.k8s.io/v1                 true         Lease
endpointslices                                 discovery.k8s.io/v1                    true         EndpointSlice
events                            ev           events.k8s.io/v1                       true         Event
flowschemas                                    flowcontrol.apiserver.k8s.io/v1beta2   false        FlowSchema
prioritylevelconfigurations                    flowcontrol.apiserver.k8s.io/v1beta2   false        PriorityLevelConfiguration
ingressclasses                                 networking.k8s.io/v1                   false        IngressClass
ingresses                         ing          networking.k8s.io/v1                   true         Ingress
networkpolicies                   netpol       networking.k8s.io/v1                   true         NetworkPolicy
runtimeclasses                                 node.k8s.io/v1                         false        RuntimeClass
poddisruptionbudgets              pdb          policy/v1                              true         PodDisruptionBudget
clusterrolebindings                            rbac.authorization.k8s.io/v1           false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io/v1           false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io/v1           true         RoleBinding
roles                                          rbac.authorization.k8s.io/v1           true         Role
priorityclasses                   pc           scheduling.k8s.io/v1                   false        PriorityClass
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
csistoragecapacities                           storage.k8s.io/v1                      true         CSIStorageCapacity
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment
configmap/kube-root-ca.crt
serviceaccount/default
Recording: run_clusterroles_tests
Running command: run_clusterroles_tests
+++ Running case: test-cmd.run_clusterroles_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_clusterroles_tests
+++ [0513 22:29:53] Creating namespace namespace-1652480993-8368
namespace/namespace-1652480993-8368 created
Context "test" modified.
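The RESTMapper case above asserts that an unknown resource type fails cleanly, then dumps the full resource table. A minimal sketch of the two sides of that check:

  kubectl get unknownresourcetype 2>&1 | grep "doesn't have a resource type"   # expected error path
  kubectl api-resources                                                        # the table printed above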
+++ [0513 22:29:53] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
has:clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:48: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:49: Successful get clusterrole/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:50: Successful get clusterrole/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
clusterrole.rbac.authorization.k8s.io/resource-reader created
rbac.sh:52: Successful get clusterrole/resource-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:get:list:
rbac.sh:53: Successful get clusterrole/resource-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:deployments:
rbac.sh:54: Successful get clusterrole/resource-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :apps:
clusterrole.rbac.authorization.k8s.io/resourcename-reader created
rbac.sh:56: Successful get clusterrole/resourcename-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:
rbac.sh:57: Successful get clusterrole/resourcename-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:58: Successful get clusterrole/resourcename-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
rbac.sh:59: Successful get clusterrole/resourcename-reader {{range.rules}}{{range.resourceNames}}{{.}}:{{end}}{{end}}: foo:
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
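The records above come from kubectl create clusterrole, first with client and server dry runs, then for real. A minimal sketch of the flag shapes this case exercises (names match the log; the aggregation selector is a placeholder):

  kubectl create clusterrole pod-admin --verb='*' --resource=pods --dry-run=client
  kubectl create clusterrole pod-admin --verb='*' --resource=pods --dry-run=server
  kubectl create clusterrole resource-reader --verb=get,list --resource=pods,deployments.apps
  kubectl create clusterrole resourcename-reader --verb=get,list --resource=pods --resource-name=foo
  kubectl create clusterrole url-reader --verb=get --non-resource-url='/logs/*' --non-resource-url='/healthz/*'
  kubectl create clusterrole aggregation-reader --aggregation-rule='foo=bar'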
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated
rbac.sh:82: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:
clusterrolebinding.rbac.authorization.k8s.io/multi-users created
rbac.sh:84: Successful get clusterrolebinding/multi-users {{range.subjects}}{{.name}}:{{end}}: user-1:user-2:
clusterrolebinding.rbac.authorization.k8s.io/super-group created
rbac.sh:87: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:
clusterrolebinding.rbac.authorization.k8s.io/super-group subjects updated
rbac.sh:89: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:
clusterrolebinding.rbac.authorization.k8s.io/multi-groups created
rbac.sh:91: Successful get clusterrolebinding/multi-groups {{range.subjects}}{{.name}}:{{end}}: group-1:group-2:
clusterrolebinding.rbac.authorization.k8s.io/super-sa created
rbac.sh:94: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.namespace}}:{{end}}: otherns:
rbac.sh:95: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:
clusterrolebinding.rbac.authorization.k8s.io/super-sa subjects updated
rbac.sh:97: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.namespace}}:{{end}}: otherns:otherfoo:
rbac.sh:98: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:
clusterrolebinding.rbac.authorization.k8s.io/cluster-admin subjects updated
clusterrolebinding.rbac.authorization.k8s.io/multi-groups subjects updated
clusterrolebinding.rbac.authorization.k8s.io/multi-users subjects updated
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated
clusterrolebinding.rbac.authorization.k8s.io/super-group subjects updated
clusterrolebinding.rbac.authorization.k8s.io/super-sa subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:basic-user subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslicemirroring-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:ephemeral-volume-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:root-ca-cert-publisher subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-after-finished-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:discovery subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:monitoring subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:node subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:service-account-issuer-discovery subjects updated
clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler subjects updated
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
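Every binding in the cluster just had test-all-user appended to its subjects, which is what kubectl set subject does when pointed at all resources. A minimal sketch of the create/update pattern behind these records:

  kubectl create clusterrolebinding super-admin --clusterrole=admin --user=super-admin
  kubectl set subject clusterrolebinding super-admin --user=foo         # appends a subject
  kubectl set subject clusterrolebinding --all --user=test-all-user     # the bulk update above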
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
rolebinding.rbac.authorization.k8s.io/localrole created
rbac.sh:119: Successful get rolebinding/localrole {{.roleRef.kind}}: Role
rbac.sh:120: Successful get rolebinding/localrole {{range.subjects}}{{.name}}:{{end}}: the-group:
rolebinding.rbac.authorization.k8s.io/localrole subjects updated
rbac.sh:122: Successful get rolebinding/localrole {{range.subjects}}{{.name}}:{{end}}: the-group:foo:
rolebinding.rbac.authorization.k8s.io/sarole created
rbac.sh:125: Successful get rolebinding/sarole {{range.subjects}}{{.namespace}}:{{end}}: otherns:
rbac.sh:126: Successful get rolebinding/sarole {{range.subjects}}{{.name}}:{{end}}: sa-name:
rolebinding.rbac.authorization.k8s.io/sarole subjects updated
rbac.sh:128: Successful get rolebinding/sarole {{range.subjects}}{{.namespace}}:{{end}}: otherns:otherfoo:
rbac.sh:129: Successful get rolebinding/sarole {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rolebinding.rbac.authorization.k8s.io/localrole subjects updated
rolebinding.rbac.authorization.k8s.io/sarole subjects updated
rbac.sh:133: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:test-all-user:
rbac.sh:134: Successful get rolebinding/localrole {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:135: Successful get rolebinding/sarole {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
query for clusterrolebindings had limit param
query for clusterrolebindings had user-specified limit param
Successful describe clusterrolebindings verbose logs:
I0513 22:30:00.438608 59372 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config
I0513 22:30:00.443274 59372 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0513 22:30:00.476750 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500 200 OK in 4 milliseconds
I0513 22:30:00.488826 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin 200 OK in 1 milliseconds
I0513 22:30:00.490567 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/multi-groups 200 OK in 1 milliseconds
I0513 22:30:00.491991 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/multi-users 200 OK in 1 milliseconds
I0513 22:30:00.493396 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/super-admin 200 OK in 1 milliseconds
I0513 22:30:00.494965 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/super-group 200 OK in 1 milliseconds
I0513 22:30:00.497584 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/super-sa 200 OK in 1 milliseconds
I0513 22:30:00.499156 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user 200 OK in 1 milliseconds
I0513 22:30:00.500703 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller 200 OK in 1 milliseconds
I0513 22:30:00.502150 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller 200 OK in 1 milliseconds
I0513 22:30:00.503698 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller 200 OK in 1 milliseconds
I0513 22:30:00.505258 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller 200 OK in 1 milliseconds
I0513 22:30:00.506924 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller 200 OK in 1 milliseconds
I0513 22:30:00.508338 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller 200 OK in 0 milliseconds
I0513 22:30:00.509777 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller 200 OK in 1 milliseconds
I0513 22:30:00.511237 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller 200 OK in 1 milliseconds
I0513 22:30:00.512711 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslice-controller 200 OK in 1 milliseconds
I0513 22:30:00.514214 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslicemirroring-controller 200 OK in 1 milliseconds
I0513 22:30:00.515721 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ephemeral-volume-controller 200 OK in 1 milliseconds
I0513 22:30:00.517242 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller 200 OK in 1 milliseconds
I0513 22:30:00.518799 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector 200 OK in 1 milliseconds
I0513 22:30:00.520348 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler 200 OK in 1 milliseconds
I0513 22:30:00.521836 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller 200 OK in 1 milliseconds
I0513 22:30:00.523327 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller 200 OK in 1 milliseconds
I0513 22:30:00.524951 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller 200 OK in 1 milliseconds
I0513 22:30:00.526458 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder 200 OK in 1 milliseconds
I0513 22:30:00.528144 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector 200 OK in 1 milliseconds
I0513 22:30:00.529547 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller 200 OK in 1 milliseconds
I0513 22:30:00.531142 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller 200 OK in 1 milliseconds
I0513 22:30:00.533274 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller 200 OK in 1 milliseconds
I0513 22:30:00.534787 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller 200 OK in 1 milliseconds
I0513 22:30:00.536768 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller 200 OK in 1 milliseconds
I0513 22:30:00.538616 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:root-ca-cert-publisher 200 OK in 1 milliseconds
I0513 22:30:00.540102 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller 200 OK in 1 milliseconds
I0513 22:30:00.541514 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller 200 OK in 1 milliseconds
I0513 22:30:00.542931 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller 200 OK in 1 milliseconds
I0513 22:30:00.544335 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller 200 OK in 0 milliseconds
I0513 22:30:00.545810 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-after-finished-controller 200 OK in 1 milliseconds
I0513 22:30:00.547479 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller 200 OK in 1 milliseconds
I0513 22:30:00.548900 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery 200 OK in 1 milliseconds
I0513 22:30:00.550601 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager 200 OK in 1 milliseconds
I0513 22:30:00.552159 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns 200 OK in 1 milliseconds
I0513 22:30:00.553660 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler 200 OK in 1 milliseconds
I0513 22:30:00.555148 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:monitoring 200 OK in 1 milliseconds
I0513 22:30:00.556789 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node 200 OK in 1 milliseconds
I0513 22:30:00.558370 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier 200 OK in 1 milliseconds
I0513 22:30:00.559675 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer 200 OK in 0 milliseconds
I0513 22:30:00.561083 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:service-account-issuer-discovery 200 OK in 1 milliseconds
I0513 22:30:00.562580 59372 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler 200 OK in 1 milliseconds
query for clusterroles had limit param
query for clusterroles had user-specified limit param
Successful describe clusterroles verbose logs:
I0513 22:30:00.774365 59399 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config
I0513 22:30:00.779549 59399 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0513 22:30:00.809772 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?limit=500 200 OK in 6 milliseconds
I0513 22:30:00.827974 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/admin 200 OK in 1 milliseconds
I0513 22:30:00.832699 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/aggregation-reader 200 OK in 1 milliseconds
I0513 22:30:00.834250 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin 200 OK in 1 milliseconds
I0513 22:30:00.836159 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/edit 200 OK in 1 milliseconds
I0513 22:30:00.840943 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/pod-admin 200 OK in 1 milliseconds
I0513 22:30:00.842392 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/resource-reader 200 OK in 1 milliseconds
I0513 22:30:00.843895 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/resourcename-reader 200 OK in 1 milliseconds
I0513 22:30:00.845971 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin 200 OK in 1 milliseconds
I0513 22:30:00.847710 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit 200 OK in 1 milliseconds
I0513 22:30:00.850684 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view 200 OK in 1 milliseconds
I0513 22:30:00.853839 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator 200 OK in 1 milliseconds
I0513 22:30:00.855464 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user 200 OK in 1 milliseconds
I0513 22:30:00.857015 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient 200 OK in 1 milliseconds
I0513 22:30:00.858619 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient 200 OK in 1 milliseconds
I0513 22:30:00.860113 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-approver 200 OK in 1 milliseconds
I0513 22:30:00.861473 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver 200 OK in 0 milliseconds
I0513 22:30:00.862976 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kubelet-serving-approver 200 OK in 1 milliseconds
I0513 22:30:00.864357 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:legacy-unknown-approver 200 OK in 1 milliseconds
I0513 22:30:00.865851 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller 200 OK in 1 milliseconds
I0513 22:30:00.867572 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller 200 OK in 1 milliseconds
I0513 22:30:00.869155 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller 200 OK in 1 milliseconds
I0513 22:30:00.870665 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller 200 OK in 1 milliseconds
I0513 22:30:00.872349 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller 200 OK in 1 milliseconds
I0513 22:30:00.874343 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller 200 OK in 1 milliseconds
I0513 22:30:00.876438 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller 200 OK in 1 milliseconds
I0513 22:30:00.879519 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller 200 OK in 1 milliseconds
I0513 22:30:00.881019 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslice-controller 200 OK in 1 milliseconds
I0513 22:30:00.882655 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslicemirroring-controller 200 OK in 1 milliseconds
I0513 22:30:00.884135 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ephemeral-volume-controller 200 OK in 0 milliseconds
I0513 22:30:00.885524 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller 200 OK in 0 milliseconds
I0513 22:30:00.887325 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector 200 OK in 1 milliseconds
I0513 22:30:00.889073 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler 200 OK in 1 milliseconds
I0513 22:30:00.890928 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller 200 OK in 1 milliseconds
I0513 22:30:00.892382 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller 200 OK in 0 milliseconds
I0513 22:30:00.893855 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller 200 OK in 1 milliseconds
I0513 22:30:00.895304 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder 200 OK in 0 milliseconds
I0513 22:30:00.897183 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector 200 OK in 1 milliseconds
I0513 22:30:00.898632 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller 200 OK in 1 milliseconds
I0513 22:30:00.900178 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller 200 OK in 1 milliseconds
I0513 22:30:00.901682 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller 200 OK in 1 milliseconds
I0513 22:30:00.903277 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller 200 OK in 1 milliseconds
I0513 22:30:00.904889 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller 200 OK in 1 milliseconds
I0513 22:30:00.906372 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:root-ca-cert-publisher 200 OK in 1 milliseconds
I0513 22:30:00.908237 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller 200 OK in 1 milliseconds
I0513 22:30:00.909712 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller 200 OK in 1 milliseconds
I0513 22:30:00.911725 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller 200 OK in 1 milliseconds
I0513 22:30:00.913471 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller 200 OK in 0 milliseconds
I0513 22:30:00.915253 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-after-finished-controller 200 OK in 1 milliseconds
I0513 22:30:00.916866 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller 200 OK in 1 milliseconds
I0513 22:30:00.918377 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery 200 OK in 1 milliseconds
I0513 22:30:00.919920 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster 200 OK in 1 milliseconds
I0513 22:30:00.921354 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator 200 OK in 1 milliseconds
I0513 22:30:00.922780 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager 200 OK in 1 milliseconds
I0513 22:30:00.924399 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns 200 OK in 0 milliseconds
I0513 22:30:00.925999 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler 200 OK in 1 milliseconds
I0513 22:30:00.928238 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin 200 OK in 1 milliseconds
I0513 22:30:00.929816 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:monitoring 200 OK in 1 milliseconds
I0513 22:30:00.931581 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node 200 OK in 1 milliseconds
I0513 22:30:00.934090 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper 200 OK in 0 milliseconds
I0513 22:30:00.935469 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector 200 OK in 1 milliseconds
I0513 22:30:00.937063 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier 200 OK in 1 milliseconds
I0513 22:30:00.938615 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner 200 OK in 1 milliseconds
I0513 22:30:00.940203 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer 200 OK in 1 milliseconds
I0513 22:30:00.941612 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:service-account-issuer-discovery 200 OK in 1 milliseconds
I0513 22:30:00.943073 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler 200 OK in 1 milliseconds
I0513 22:30:00.944425 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/url-reader 200 OK in 0 milliseconds
I0513 22:30:00.946111 59399 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles/view 200 OK in 1 milliseconds
+++ exit code: 0
Recording: run_role_tests
Running command: run_role_tests
+++ Running case: test-cmd.run_role_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_role_tests
+++ [0513 22:30:01] Creating namespace namespace-1652481001-2375
namespace/namespace-1652481001-2375 created
Context "test" modified.
+++ [0513 22:30:01] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:159: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:160: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:161: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
message:the server doesn't have a resource type "invalid-resource"
has:the server doesn't have a resource type "invalid-resource"
role.rbac.authorization.k8s.io/group-reader created
rbac.sh:166: Successful get role/group-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:
rbac.sh:167: Successful get role/group-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: deployments:
rbac.sh:168: Successful get role/group-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: apps:
Successful
message:the server doesn't have a resource type "deployments" in group "invalid-group"
has:the server doesn't have a resource type "deployments" in group "invalid-group"
role.rbac.authorization.k8s.io/subresource-reader created
rbac.sh:173: Successful get role/subresource-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:
rbac.sh:174: Successful get role/subresource-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods/status:
rbac.sh:175: Successful get role/subresource-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
role.rbac.authorization.k8s.io/group-subresource-reader created
rbac.sh:178: Successful get role/group-subresource-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:
rbac.sh:179: Successful get role/group-subresource-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: replicasets/scale:
rbac.sh:180: Successful get role/group-subresource-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: apps:
Successful
message:the server doesn't have a resource type "rs" in group "invalid-group"
has:the server doesn't have a resource type "rs" in group "invalid-group"
role.rbac.authorization.k8s.io/resourcename-reader created
rbac.sh:185: Successful get role/resourcename-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:
rbac.sh:186: Successful get role/resourcename-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:187: Successful get role/resourcename-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
rbac.sh:188: Successful get role/resourcename-reader {{range.rules}}{{range.resourceNames}}{{.}}:{{end}}{{end}}: foo:
role.rbac.authorization.k8s.io/resource-reader created
rbac.sh:191: Successful get role/resource-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:list:get:list:
rbac.sh:192: Successful get role/resource-reader {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods/status:deployments:
rbac.sh:193: Successful get role/resource-reader {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :apps:
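The role case mirrors the clusterrole one but is namespaced, and it adds subresource and group-qualified forms. A minimal sketch of the create commands behind the records above (names match the log):

  kubectl create role pod-admin --verb='*' --resource=pods
  kubectl create role group-reader --verb=get,list --resource=deployments.apps
  kubectl create role subresource-reader --verb=get,list --resource=pods/status
  kubectl create role group-subresource-reader --verb=get,list --resource=replicasets.apps/scale
  kubectl create role resourcename-reader --verb=get,list --resource=pods --resource-name=foo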
query for roles had limit param
query for roles had user-specified limit param
Successful describe roles verbose logs:
I0513 22:30:04.758285 59944 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config
I0513 22:30:04.763247 59944 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0513 22:30:04.861510 59944 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1652481001-2375/roles?limit=500 200 OK in 1 milliseconds
I0513 22:30:04.864660 59944 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1652481001-2375/roles/group-reader 200 OK in 1 milliseconds
I0513 22:30:04.866553 59944 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1652481001-2375/roles/group-subresource-reader 200 OK in 1 milliseconds
I0513 22:30:04.868417 59944 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1652481001-2375/roles/pod-admin 200 OK in 1 milliseconds
I0513 22:30:04.870200 59944 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1652481001-2375/roles/resource-reader 200 OK in 1 milliseconds
I0513 22:30:04.871826 59944 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1652481001-2375/roles/resourcename-reader 200 OK in 1 milliseconds
I0513 22:30:04.873611 59944 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1652481001-2375/roles/subresource-reader 200 OK in 1 milliseconds
query for rolebindings had limit param
query for rolebindings had user-specified limit param
Successful describe rolebindings verbose logs:
I0513 22:30:05.005340 59969 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config
I0513 22:30:05.010560 59969 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0513 22:30:05.033067 59969 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/namespaces/namespace-1652481001-2375/rolebindings?limit=500 200 OK in 1 milliseconds
No resources found in namespace-1652481001-2375 namespace.
+++ exit code: 0
Recording: run_assert_short_name_tests
Running command: run_assert_short_name_tests
+++ Running case: test-cmd.run_assert_short_name_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_assert_short_name_tests
+++ [0513 22:30:05] Creating namespace namespace-1652481005-3273
namespace/namespace-1652481005-3273 created
Context "test" modified.
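The "limit param" checks here and above verify that kubectl describe pages its list calls: at verbosity 6 the round-tripper trace shows the initial list carrying the default limit=500. A minimal sketch of reproducing that trace by hand (the --chunk-size flag is an assumption about what drives the "user-specified limit" variant):

  kubectl describe roles -v=6 2>&1 | grep 'limit=500'
  kubectl describe roles --chunk-size=10 -v=6 2>&1 | grep 'limit=10'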
+++ [0513 22:30:05] Testing assert short name
+++ [0513 22:30:05] Testing propagation of short names for resources
Successful
message:{"kind":"APIResourceList","groupVersion":"v1","resources":[{"name":"bindings","singularName":"","namespaced":true,"kind":"Binding","verbs":["create"]},{"name":"componentstatuses","singularName":"","namespaced":false,"kind":"ComponentStatus","verbs":["get","list"],"shortNames":["cs"]},{"name":"configmaps","singularName":"","namespaced":true,"kind":"ConfigMap","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["cm"],"storageVersionHash":"qFsyl6wFWjQ="},{"name":"endpoints","singularName":"","namespaced":true,"kind":"Endpoints","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["ep"],"storageVersionHash":"fWeeMqaN/OA="},{"name":"events","singularName":"","namespaced":true,"kind":"Event","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["ev"],"storageVersionHash":"r2yiGXH7wu8="},{"name":"limitranges","singularName":"","namespaced":true,"kind":"LimitRange","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["limits"],"storageVersionHash":"EBKMFVe6cwo="},{"name":"namespaces","singularName":"","namespaced":false,"kind":"Namespace","verbs":["create","delete","get","list","patch","update","watch"],"shortNames":["ns"],"storageVersionHash":"Q3oi5N2YM8M="},{"name":"namespaces/finalize","singularName":"","namespaced":false,"kind":"Namespace","verbs":["update"]},{"name":"namespaces/status","singularName":"","namespaced":false,"kind":"Namespace","verbs":["get","patch","update"]},{"name":"nodes","singularName":"","namespaced":false,"kind":"Node","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["no"],"storageVersionHash":"XwShjMxG9Fs="},{"name":"nodes/proxy","singularName":"","namespaced":false,"kind":"NodeProxyOptions","verbs":["create","delete","get","patch","update"]},{"name":"nodes/status","singularName":"","namespaced":false,"kind":"Node","verbs":["get","patch","update"]},{"name":"persistentvolumeclaims","singularName":"","namespaced":true,"kind":"PersistentVolumeClaim","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["pvc"],"storageVersionHash":"QWTyNDq0dC4="},{"name":"persistentvolumeclaims/status","singularName":"","namespaced":true,"kind":"PersistentVolumeClaim","verbs":["get","patch","update"]},{"name":"persistentvolumes","singularName":"","namespaced":false,"kind":"PersistentVolume","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["pv"],"storageVersionHash":"HN/zwEC+JgM="},{"name":"persistentvolumes/status","singularName":"","namespaced":false,"kind":"PersistentVolume","verbs":["get","patch","update"]},{"name":"pods","singularName":"","namespaced":true,"kind":"Pod","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["po"],"categories":["all"],"storageVersionHash":"xPOwRZ+Yhw8="},{"name":"pods/attach","singularName":"","namespaced":true,"kind":"PodAttachOptions","verbs":["create","get"]},{"name":"pods/binding","singularName":"","namespaced":true,"kind":"Binding","verbs":["create"]},{"name":"pods/ephemeralcontainers","singularName":"","namespaced":true,"kind":"Pod","verbs":["get","patch","update"]},{"name":"pods/eviction","singularName":"","namespaced":true,"group":"policy","version":"v1","kind":"Eviction","verbs":["create"]},{"name":"pods/exec","singularName":"","namespaced":true,"kind":"PodExecOptions","verbs":["create","get"]},{"name":"pods/log","singularName":"","namespaced":true,"kind":"Pod","verbs":["get"]},{"name":"pods/portforward","singularName":"","namespaced":true,"kind":"PodPortForwardOptions","verbs":["create","get"]},{"name":"pods/proxy","singularName":"","namespaced":true,"kind":"PodProxyOptions","verbs":["create","delete","get","patch","update"]},{"name":"pods/status","singularName":"","namespaced":true,"kind":"Pod","verbs":["get","patch","update"]},{"name":"podtemplates","singularName":"","namespaced":true,"kind":"PodTemplate","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"storageVersionHash":"LIXB2x4IFpk="},{"name":"replicationcontrollers","singularName":"","namespaced":true,"kind":"ReplicationController","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["rc"],"categories":["all"],"storageVersionHash":"Jond2If31h0="},{"name":"replicationcontrollers/scale","singularName":"","namespaced":true,"group":"autoscaling","version":"v1","kind":"Scale","verbs":["get","patch","update"]},{"name":"replicationcontrollers/status","singularName":"","namespaced":true,"kind":"ReplicationController","verbs":["get","patch","update"]},{"name":"resourcequotas","singularName":"","namespaced":true,"kind":"ResourceQuota","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["quota"],"storageVersionHash":"8uhSgffRX6w="},{"name":"resourcequotas/status","singularName":"","namespaced":true,"kind":"ResourceQuota","verbs":["get","patch","update"]},{"name":"secrets","singularName":"","namespaced":true,"kind":"Secret","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"storageVersionHash":"S6u1pOWzb84="},{"name":"serviceaccounts","singularName":"","namespaced":true,"kind":"ServiceAccount","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["sa"],"storageVersionHash":"pbx9ZvyFpBE="},{"name":"serviceaccounts/token","singularName":"","namespaced":true,"group":"authentication.k8s.io","version":"v1","kind":"TokenRequest","verbs":["create"]},{"name":"services","singularName":"","namespaced":true,"kind":"Service","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["svc"],"categories":["all"],"storageVersionHash":"0/CO1lhkEBI="},{"name":"services/proxy","singularName":"","namespaced":true,"kind":"ServiceProxyOptions","verbs":["create","delete","get","patch","update"]},{"name":"services/status","singularName":"","namespaced":true,"kind":"Service","verbs":["get","patch","update"]}]}
has:{"name":"configmaps","singularName":"","namespaced":true,"kind":"ConfigMap","verbs":\["create","delete","deletecollection","get","list","patch","update","watch"\],"shortNames":\["cm"\],"storageVersionHash":
+++ exit code: 0
(Bmessage:"name":"pods","singularName":"","namespaced":true,"kind":"Pod","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["po"],"categories":["all"],"storageVersionHash":"xPOwRZ+Yhw8="} has:"categories":\["all"\] +++ exit code: 0 Recording: run_pod_tests Running command: run_pod_tests +++ Running case: test-cmd.run_pod_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_pod_tests +++ [0513 22:30:05] Testing kubectl(v1:pods) +++ [0513 22:30:05] Creating namespace namespace-1652481005-22225 namespace/namespace-1652481005-22225 created Context "test" modified. core.sh:76: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/valid-pod created { "apiVersion": "v1", "items": [ { "apiVersion": "v1", "kind": "Pod", "metadata": { "creationTimestamp": "2022-05-13T22:30:05Z", "labels": { "name": "valid-pod" }, "name": "valid-pod", "namespace": "namespace-1652481005-22225", "resourceVersion": "338", "uid": "fcf99a0a-09e0-4611-bc6b-18d2a8c4e4b7" }, "spec": { "containers": [ { "image": "k8s.gcr.io/serve_hostname", "imagePullPolicy": "Always", "name": "kubernetes-serve-hostname", "resources": { "limits": { "cpu": "1", "memory": "512Mi" }, "requests": { "cpu": "1", "memory": "512Mi" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File" } ], "dnsPolicy": "ClusterFirst", "enableServiceLinks": true, "preemptionPolicy": "PreemptLowerPriority", "priority": 0, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30 }, "status": { "phase": "Pending", "qosClass": "Guaranteed" } } ], "kind": "List", "metadata": { "resourceVersion": "" } } core.sh:81: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (Bcore.sh:82: Successful get pod valid-pod {{.metadata.name}}: valid-pod (Bcore.sh:83: Successful get pod/valid-pod {{.metadata.name}}: valid-pod (Bcore.sh:84: Successful get pods/valid-pod {{.metadata.name}}: valid-pod (BSuccessful (Bmessage:kubectl-create has:kubectl-create core.sh:89: Successful get pods {.items[*].metadata.name}: valid-pod (Bcore.sh:90: Successful get pod valid-pod {.metadata.name}: valid-pod (Bcore.sh:91: Successful get pod/valid-pod {.metadata.name}: valid-pod (Bcore.sh:92: Successful get pods/valid-pod {.metadata.name}: valid-pod (Bmatched Name: matched Image: matched Node: matched Labels: matched Status: core.sh:94: Successful describe pods valid-pod: Name: valid-pod Namespace: namespace-1652481005-22225 Priority: 0 Node: Labels: name=valid-pod Annotations: Status: Pending IP: IPs: Containers: kubernetes-serve-hostname: Image: k8s.gcr.io/serve_hostname Port: Host Port: Limits: cpu: 1 memory: 512Mi Requests: cpu: 1 memory: 512Mi Environment: Mounts: Volumes: QoS Class: Guaranteed Node-Selectors: Tolerations: Events: (Bcore.sh:96: Successful describe Name: valid-pod Namespace: namespace-1652481005-22225 Priority: 0 Node: Labels: name=valid-pod Annotations: Status: Pending IP: IPs: Containers: kubernetes-serve-hostname: Image: k8s.gcr.io/serve_hostname Port: Host Port: Limits: cpu: 1 memory: 512Mi Requests: cpu: 1 memory: 512Mi Environment: Mounts: Volumes: QoS Class: Guaranteed Node-Selectors: Tolerations: Events: (B core.sh:98: Successful describe Name: valid-pod Namespace: namespace-1652481005-22225 Priority: 0 Node: Labels: name=valid-pod Annotations: Status: Pending IP: IPs: Containers: kubernetes-serve-hostname: Image: k8s.gcr.io/serve_hostname Port: Host Port: Limits: cpu: 1 
      memory: 512Mi
    Requests:
      cpu: 1
      memory: 512Mi
    Environment:
    Mounts:
Volumes:
QoS Class: Guaranteed
Node-Selectors:
Tolerations:
core.sh:100: Successful describe
Name: valid-pod
Namespace: namespace-1652481005-22225
Priority: 0
Node:
Labels: name=valid-pod
Annotations:
Status: Pending
IP:
IPs:
Containers:
  kubernetes-serve-hostname:
    Image: k8s.gcr.io/serve_hostname
    Port:
    Host Port:
    Limits:
      cpu: 1
      memory: 512Mi
    Requests:
      cpu: 1
      memory: 512Mi
    Environment:
    Mounts:
Volumes:
QoS Class: Guaranteed
Node-Selectors:
Tolerations:
Events:
matched Name:
matched Image:
matched Node:
matched Labels:
matched Status:
Successful describe pods:
Name: valid-pod
Namespace: namespace-1652481005-22225
Priority: 0
Node:
Labels: name=valid-pod
Annotations:
Status: Pending
IP:
IPs:
Containers:
  kubernetes-serve-hostname:
    Image: k8s.gcr.io/serve_hostname
    Port:
    Host Port:
    Limits:
      cpu: 1
      memory: 512Mi
    Requests:
      cpu: 1
      memory: 512Mi
    Environment:
    Mounts:
Volumes:
QoS Class: Guaranteed
Node-Selectors:
Tolerations:
Events:
Successful describe
Name: valid-pod
Namespace: namespace-1652481005-22225
Priority: 0
Node:
Labels: name=valid-pod
Annotations:
Status: Pending
IP:
IPs:
Containers:
  kubernetes-serve-hostname:
    Image: k8s.gcr.io/serve_hostname
    Port:
    Host Port:
    Limits:
      cpu: 1
      memory: 512Mi
    Requests:
      cpu: 1
      memory: 512Mi
    Environment:
    Mounts:
Volumes:
QoS Class: Guaranteed
Node-Selectors:
Tolerations:
Events:
Successful describe
Name: valid-pod
Namespace: namespace-1652481005-22225
Priority: 0
Node:
Labels: name=valid-pod
Annotations:
Status: Pending
IP:
IPs:
Containers:
  kubernetes-serve-hostname:
    Image: k8s.gcr.io/serve_hostname
    Port:
    Host Port:
    Limits:
      cpu: 1
      memory: 512Mi
    Requests:
      cpu: 1
      memory: 512Mi
    Environment:
    Mounts:
Volumes:
QoS Class: Guaranteed
Node-Selectors:
Tolerations:
Successful describe
Name: valid-pod
Namespace: namespace-1652481005-22225
Priority: 0
Node:
Labels: name=valid-pod
Annotations:
Status: Pending
IP:
IPs:
Containers:
  kubernetes-serve-hostname:
    Image: k8s.gcr.io/serve_hostname
    Port:
    Host Port:
    Limits:
      cpu: 1
      memory: 512Mi
    Requests:
      cpu: 1
      memory: 512Mi
    Environment:
    Mounts:
Volumes:
QoS Class: Guaranteed
Node-Selectors:
Tolerations:
Events:
query for pods had limit param
query for events had limit param
query for pods had user-specified limit param
Successful describe pods verbose logs:
I0513 22:30:07.242664 60537 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config
I0513 22:30:07.249292 60537 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 6 milliseconds
I0513 22:30:07.274311 60537 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481005-22225/pods?limit=500 200 OK in 1 milliseconds
I0513 22:30:07.276390 60537 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481005-22225/pods/valid-pod 200 OK in 1 milliseconds
I0513 22:30:07.279231 60537 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481005-22225/events?fieldSelector=involvedObject.name%3Dvalid-pod%2CinvolvedObject.namespace%3Dnamespace-1652481005-22225%2CinvolvedObject.uid%3Dfcf99a0a-09e0-4611-bc6b-18d2a8c4e4b7&limit=500 200 OK in 1 milliseconds
core.sh:118: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
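
The round_trippers lines above come from running kubectl at a high log verbosity: at -v=6 and above, kubectl logs every HTTP request it issues, which is how the test verifies that describe sends limit parameters. A sketch of reproducing that output by hand (pod name taken from the test above):

    # Print each GET kubectl makes while describing the pod
    kubectl describe pod valid-pod -v=6
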
pod "valid-pod" force deleted core.sh:122: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/valid-pod created core.sh:127: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (Bpod "valid-pod" deleted core.sh:131: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/valid-pod created core.sh:136: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (Bpod "valid-pod" deleted core.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (B+++ [0513 22:30:08] Creating namespace namespace-1652481008-14417 namespace/namespace-1652481008-14417 created Context "test" modified. core.sh:145: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/valid-pod created core.sh:149: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (Bcore.sh:153: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (Bwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "valid-pod" force deleted core.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (B+++ [0513 22:30:09] Creating namespace namespace-1652481009-28266 namespace/namespace-1652481009-28266 created Context "test" modified. core.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/valid-pod created core.sh:166: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (Bcore.sh:170: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (Bwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "valid-pod" force deleted core.sh:174: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: (B+++ [0513 22:30:09] Creating namespace namespace-1652481009-4588 namespace/namespace-1652481009-4588 created Context "test" modified. core.sh:179: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/valid-pod created core.sh:183: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (BSuccessful (Bmessage:NAME READY STATUS RESTARTS AGE valid-pod 0/1 Pending 0 0s has:valid-pod Successful (Bmessage:NAME READY STATUS RESTARTS AGE valid-pod 0/1 Pending 0 0s has:valid-pod core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (Berror: resource(s) were provided, but no name was specified core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (Bcore.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (Berror: setting 'all' parameter but found a non empty selector. core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (Bcore.sh:210: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (Bwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
pod "valid-pod" force deleted core.sh:214: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: (Bcore.sh:219: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: : (Bnamespace/test-kubectl-describe-pod created core.sh:223: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod (Bcore.sh:227: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: (Bsecret/test-secret created (dry run) secret/test-secret created (server dry run) core.sh:231: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: (Bsecret/test-secret created core.sh:235: Successful get secret/test-secret --namespace=test-kubectl-describe-pod {{.metadata.name}}: test-secret (Bcore.sh:236: Successful get secret/test-secret --namespace=test-kubectl-describe-pod {{.type}}: test-type (Bcore.sh:241: Successful get configmaps --namespace=test-kubectl-describe-pod {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: : (Bconfigmap/test-configmap created core.sh:247: Successful get configmap/test-configmap --namespace=test-kubectl-describe-pod {{.metadata.name}}: test-configmap (Bcore.sh:251: Successful get pdb --namespace=test-kubectl-describe-pod {{range.items}}{{ if eq .metadata.name \"test-pdb-1\" }}found{{end}}{{end}}:: : (Bpoddisruptionbudget.policy/test-pdb-1 created (dry run) I0513 22:30:12.593131 53075 controller.go:611] quota admission added evaluator for: poddisruptionbudgets.policy poddisruptionbudget.policy/test-pdb-1 created (server dry run) core.sh:255: Successful get pdb --namespace=test-kubectl-describe-pod {{range.items}}{{ if eq .metadata.name \"test-pdb-1\" }}found{{end}}{{end}}:: : (Bpoddisruptionbudget.policy/test-pdb-1 created core.sh:259: Successful get pdb/test-pdb-1 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 2 (Bpoddisruptionbudget.policy/test-pdb-2 created core.sh:263: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50% (Bquery for poddisruptionbudgets had limit param query for events had limit param query for poddisruptionbudgets had user-specified limit param Successful describe poddisruptionbudgets verbose logs: I0513 22:30:12.970610 61455 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:30:12.975505 61455 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:30:12.997541 61455 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets?limit=500 200 OK in 1 milliseconds I0513 22:30:13.000359 61455 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-1 200 OK in 1 milliseconds I0513 22:30:13.002412 61455 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.namespace%3Dtest-kubectl-describe-pod%2CinvolvedObject.kind%3DPodDisruptionBudget%2CinvolvedObject.uid%3De6e2832d-e785-4c52-accf-8bced8683fee%2CinvolvedObject.name%3Dtest-pdb-1&limit=500 200 OK in 1 milliseconds I0513 22:30:13.004373 61455 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-2 200 OK in 1 milliseconds I0513 22:30:13.005969 61455 
query for poddisruptionbudgets had limit param
query for events had limit param
query for poddisruptionbudgets had user-specified limit param
Successful describe poddisruptionbudgets verbose logs:
I0513 22:30:12.970610 61455 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config
I0513 22:30:12.975505 61455 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0513 22:30:12.997541 61455 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets?limit=500 200 OK in 1 milliseconds
I0513 22:30:13.000359 61455 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-1 200 OK in 1 milliseconds
I0513 22:30:13.002412 61455 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.namespace%3Dtest-kubectl-describe-pod%2CinvolvedObject.kind%3DPodDisruptionBudget%2CinvolvedObject.uid%3De6e2832d-e785-4c52-accf-8bced8683fee%2CinvolvedObject.name%3Dtest-pdb-1&limit=500 200 OK in 1 milliseconds
I0513 22:30:13.004373 61455 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-2 200 OK in 1 milliseconds
I0513 22:30:13.005969 61455 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.kind%3DPodDisruptionBudget%2CinvolvedObject.uid%3Dae478c3a-5117-4fdd-85ba-0a5babf71a7b%2CinvolvedObject.name%3Dtest-pdb-2%2CinvolvedObject.namespace%3Dtest-kubectl-describe-pod&limit=500 200 OK in 1 milliseconds
poddisruptionbudget.policy/test-pdb-3 created
core.sh:271: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:275: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:281: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}:
pod/env-test-pod created
matched TEST_CMD_1
matched
matched TEST_CMD_2
matched
matched TEST_CMD_3
matched env-test-pod (v1:metadata.name)
core.sh:284: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
Name: env-test-pod
Namespace: test-kubectl-describe-pod
Priority: 0
Node:
Labels:
Annotations:
Status: Pending
IP:
IPs:
Containers:
  test-container:
    Image: k8s.gcr.io/busybox
    Port:
    Host Port:
    Command:
      /bin/sh
      -c
      env
    Environment:
      TEST_CMD_1: Optional: false
      TEST_CMD_2: Optional: false
      TEST_CMD_3: env-test-pod (v1:metadata.name)
    Mounts:
Volumes:
QoS Class: BestEffort
Node-Selectors:
Tolerations:
Events:
matched TEST_CMD_1
matched
matched TEST_CMD_2
matched
matched TEST_CMD_3
matched env-test-pod (v1:metadata.name)
Successful describe pods --namespace=test-kubectl-describe-pod:
Name: env-test-pod
Namespace: test-kubectl-describe-pod
Priority: 0
Node:
Labels:
Annotations:
Status: Pending
IP:
IPs:
Containers:
  test-container:
    Image: k8s.gcr.io/busybox
    Port:
    Host Port:
    Command:
      /bin/sh
      -c
      env
    Environment:
      TEST_CMD_1: Optional: false
      TEST_CMD_2: Optional: false
      TEST_CMD_3: env-test-pod (v1:metadata.name)
    Mounts:
Volumes:
QoS Class: BestEffort
Node-Selectors:
Tolerations:
Events:
pod "env-test-pod" deleted
secret "test-secret" deleted
configmap "test-configmap" deleted
poddisruptionbudget.policy "test-pdb-1" deleted
poddisruptionbudget.policy "test-pdb-2" deleted
poddisruptionbudget.policy "test-pdb-3" deleted
poddisruptionbudget.policy "test-pdb-4" deleted
namespace "test-kubectl-describe-pod" deleted
core.sh:296: Successful get priorityclasses {{range.items}}{{ if eq .metadata.name \"test-priorityclass\" }}found{{end}}{{end}}:: :
priorityclass.scheduling.k8s.io/test-priorityclass created (dry run)
priorityclass.scheduling.k8s.io/test-priorityclass created (server dry run)
core.sh:300: Successful get priorityclasses {{range.items}}{{ if eq .metadata.name \"test-priorityclass\" }}found{{end}}{{end}}:: :
priorityclass.scheduling.k8s.io/test-priorityclass created
core.sh:303: Successful get priorityclasses {{range.items}}{{ if eq .metadata.name \"test-priorityclass\" }}found{{end}}{{end}}:: found:
query for priorityclasses had limit param
query for events had limit param
query for priorityclasses had user-specified limit param
Successful describe priorityclasses verbose logs:
I0513 22:30:19.828270 61736 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config
I0513 22:30:19.832988 61736 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0513 22:30:19.857149 61736 round_trippers.go:553] GET https://127.0.0.1:6443/apis/scheduling.k8s.io/v1/priorityclasses?limit=500 200 OK in 1 milliseconds
I0513 22:30:19.860062 61736 round_trippers.go:553] GET https://127.0.0.1:6443/apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical 200 OK in 1 milliseconds
I0513 22:30:19.862238 61736 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dsystem-cluster-critical%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DPriorityClass%2CinvolvedObject.uid%3De933a705-fe81-4609-bec6-69a2f3bf683d&limit=500 200 OK in 1 milliseconds
I0513 22:30:19.864352 61736 round_trippers.go:553] GET https://127.0.0.1:6443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical 200 OK in 1 milliseconds
I0513 22:30:19.865961 61736 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dsystem-node-critical%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DPriorityClass%2CinvolvedObject.uid%3Df08c5607-07f6-4c14-9924-8551983b4e36&limit=500 200 OK in 1 milliseconds
I0513 22:30:19.867505 61736 round_trippers.go:553] GET https://127.0.0.1:6443/apis/scheduling.k8s.io/v1/priorityclasses/test-priorityclass 200 OK in 1 milliseconds
I0513 22:30:19.869109 61736 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dtest-priorityclass%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DPriorityClass%2CinvolvedObject.uid%3D54e09a4e-407c-4ae4-a2bf-360b566ecff9&limit=500 200 OK in 1 milliseconds
priorityclass.scheduling.k8s.io "test-priorityclass" deleted
+++ [0513 22:30:20] Creating namespace namespace-1652481020-13892
namespace/namespace-1652481020-13892 created
Context "test" modified.
core.sh:311: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
pod/agnhost-primary created
core.sh:316: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: agnhost-primary:valid-pod:
core.sh:320: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: agnhost-primary:valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
pod "agnhost-primary" force deleted
core.sh:324: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
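
The "Immediate deletion does not wait for confirmation..." warnings that recur throughout this run come from force deletion, which removes the object from the API server without waiting for graceful termination. A sketch with the pod names from the case above:

    kubectl delete pod valid-pod agnhost-primary --force --grace-period=0
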
+++ [0513 22:30:20] Creating namespace namespace-1652481020-21605
namespace/namespace-1652481020-21605 created
Context "test" modified.
core.sh:329: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
core.sh:333: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:337: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:
pod/valid-pod labeled
pod/valid-pod labeled
core.sh:342: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:
core.sh:346: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:
pod/valid-pod labeled
core.sh:350: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:new-valid-pod:
core.sh:354: Successful get pod valid-pod {{range.metadata.labels}}{{.}}:{{end}}: valid-pod:new-valid-pod:
pod/valid-pod labeled
core.sh:358: Successful get pod valid-pod {{.metadata.labels.emptylabel}}:
core.sh:362: Successful get pod valid-pod {{.metadata.annotations.emptyannotation}}:
pod/valid-pod annotated (dry run)
pod/valid-pod annotated (server dry run)
core.sh:367: Successful get pod valid-pod {{.metadata.annotations.emptyannotation}}:
core.sh:371: Successful get pod valid-pod {{.metadata.annotations.emptyannotation}}:
pod/valid-pod annotated
core.sh:375: Successful get pod valid-pod {{.metadata.annotations.emptyannotation}}:
Successful
message:kubectl-create kubectl-annotate kubectl-label
has:kubectl-annotate
core.sh:382: Successful get pod valid-pod {{range.items}}{{.metadata.annotations}}:{{end}}:
Flag --record has been deprecated, --record will be removed in the future
pod/valid-pod labeled
core.sh:386: Successful get pod valid-pod {{range.metadata.annotations}}{{.}}:{{end}}: :kubectl label pods valid-pod record-change=true --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true:
Successful
message:kubectl-create kubectl-annotate kubectl-label
has:kubectl-label
Flag --record has been deprecated, --record will be removed in the future
pod/valid-pod labeled
core.sh:395: Successful get pod valid-pod {{range.metadata.annotations}}{{.}}:{{end}}: :kubectl label pods valid-pod record-change=true --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true:
Flag --record has been deprecated, --record will be removed in the future
pod/valid-pod labeled
core.sh:402: Successful get pod valid-pod {{range.metadata.annotations}}{{.}}:{{end}}: :kubectl label pods valid-pod new-record-change=true --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true:
core.sh:407: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
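
The annotations asserted at core.sh:386/395/402 show what the (deprecated) --record flag does: it stores the full invoking command in the kubernetes.io/change-cause annotation. A sketch of reproducing and reading that annotation (the jsonpath escaping of the dotted key is the standard kubectl syntax):

    kubectl label pods valid-pod record-change=true --record
    kubectl get pod valid-pod -o jsonpath='{.metadata.annotations.kubernetes\.io/change-cause}'
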
pod "valid-pod" force deleted core.sh:411: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bcore.sh:415: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/pod-with-precision created core.sh:419: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: pod-with-precision: (Bpod/pod-with-precision patched core.sh:425: Successful get pod pod-with-precision {{.metadata.annotations.patchkey}}: patchvalue (BI0513 22:30:24.311370 56663 namespace_controller.go:185] Namespace has been deleted test-kubectl-describe-pod pod/pod-with-precision labeled core.sh:429: Successful get pod pod-with-precision {{.metadata.labels.labelkey}}: labelvalue (Bpod/pod-with-precision annotated core.sh:433: Successful get pod pod-with-precision {{.metadata.annotations.annotatekey}}: annotatevalue (Bpod "pod-with-precision" deleted pod/test-pod created pod/test-pod annotated core.sh:443: Successful get pod test-pod {{.metadata.annotations.annotatekey}}: annotatevalue (BapiVersion: v1 kind: Pod metadata: annotations: annotatekey: localvalue labels: name: test-pod-label name: test-pod spec: containers: - image: k8s.gcr.io/pause:3.7 name: kubernetes-pause core.sh:450: Successful get pod test-pod {{.metadata.annotations.annotatekey}}: annotatevalue (BSuccessful (Bmessage:apiVersion: v1 kind: Pod metadata: annotations: annotatekey: localvalue labels: name: test-pod-label name: test-pod spec: containers: - image: k8s.gcr.io/pause:3.7 name: kubernetes-pause has:localvalue pod "test-pod" deleted core.sh:458: Successful get service {{range.items}}{{.metadata.name}}:{{end}}: (Bcore.sh:459: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (BI0513 22:30:25.579096 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481020-21605/modified" clusterIPs=map[IPv4:10.0.0.181] service/modified created replicationcontroller/modified created I0513 22:30:25.599455 56663 event.go:294] "Event occurred" object="namespace-1652481020-21605/modified" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: modified-hxn9z" core.sh:467: Successful get service {{range.items}}{{.metadata.name}}:{{end}}: modified: (Bcore.sh:468: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: modified: (BSuccessful (Bmessage:kubectl-create has:kubectl-create Successful (Bmessage:kube-controller-manager kubectl-create has:kubectl-create service "modified" deleted replicationcontroller "modified" deleted core.sh:479: Successful get service {{range.items}}{{.metadata.name}}:{{end}}: (Bcore.sh:480: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (BI0513 22:30:26.423676 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481020-21605/modified" clusterIPs=map[IPv4:10.0.0.137] service/modified created replicationcontroller/modified created I0513 22:30:26.475457 56663 event.go:294] "Event occurred" object="namespace-1652481020-21605/modified" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: modified-gzj2k" core.sh:484: Successful get service {{range.items}}{{.metadata.name}}:{{end}}: modified: (Bcore.sh:485: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: modified: (Bservice "modified" deleted replicationcontroller "modified" deleted core.sh:496: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/valid-pod created core.sh:500: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (BSuccessful 
Successful
message:The request is invalid: patch: Invalid value: "map[metadata:map[labels:invalid]]": cannot restore map from string
has:cannot restore map from string
Successful
message:pod/valid-pod patched (no change)
has:patched (no change)
Flag --record has been deprecated, --record will be removed in the future
pod/valid-pod patched
core.sh:517: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
core.sh:519: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
pod/valid-pod patched
core.sh:523: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx2:
pod/valid-pod patched
core.sh:527: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
Flag --record has been deprecated, --record will be removed in the future
pod/valid-pod patched
Flag --record has been deprecated, --record will be removed in the future
pod/valid-pod patched
core.sh:532: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
pod/valid-pod patched
core.sh:537: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
pod/valid-pod patched
core.sh:542: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.7:
Successful
message:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0513 22:30:29] "kubectl patch with resourceVersion 591" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:586: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:kubectl-replace
has:kubectl-replace
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W0513 22:30:30.295854 56663 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
core.sh:614: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:639: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced
core.sh:655: Successful get node node-v1-test {{.metadata.annotations.a}}: b
node "node-v1-test" deleted
core.sh:662: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
core.sh:665: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
Successful
message:kubectl-replace kubectl-edit
has:kubectl-edit
Edit cancelled, no changes made.
Edit cancelled, no changes made.
Edit cancelled, no changes made.
Edit cancelled, no changes made.
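
The "pod/valid-pod replaced" after a delete, together with the paired errors that --grace-period and --timeout must have --force specified, reflects kubectl replace --force semantics: delete the object first, then recreate it from the manifest. A sketch (file name illustrative):

    kubectl replace --force -f valid-pod.yaml
    # --grace-period/--timeout are only accepted alongside --force:
    kubectl replace --force --grace-period=0 -f valid-pod.yaml
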
core.sh:681: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: localonlyvalue
  name: test-pod
spec:
  containers:
  - image: k8s.gcr.io/pause:3.7
    name: kubernetes-pause
core.sh:686: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
Successful
message:apiVersion: v1
kind: Pod
metadata:
  labels:
    name: localonlyvalue
  name: test-pod
spec:
  containers:
  - image: k8s.gcr.io/pause:3.7
    name: kubernetes-pause
has:localonlyvalue
core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:699: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:703: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:707: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:711: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
+++ [0513 22:30:33] Creating namespace namespace-1652481033-24918
namespace/namespace-1652481033-24918 created
Context "test" modified.
core.sh:716: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/redis-master created
pod/valid-pod created
core.sh:720: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
core.sh:724: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
pod "redis-master" deleted
pod "valid-pod" deleted
core.sh:728: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
+++ [0513 22:30:33] Creating namespace namespace-1652481033-5199
namespace/namespace-1652481033-5199 created
Context "test" modified.
core.sh:734: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod created
core.sh:738: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
pod/test-pod replaced
core.sh:746: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-replaced
Warning: resource pods/test-pod is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/test-pod configured
core.sh:753: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-applied
pod/test-pod replaced
core.sh:762: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-replaced
pod "test-pod" deleted
+++ exit code: 0
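
The warning above about the missing kubectl.kubernetes.io/last-applied-configuration annotation, and the save-config cases that follow, hinge on the same annotation: --save-config records the submitted manifest on the object so that a later kubectl apply can compute its three-way diff. A sketch (file name illustrative):

    kubectl create -f test-pod.yaml --save-config
    kubectl apply -f test-pod.yaml   # no warning; the annotation is already present
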
Recording: run_save_config_tests
Running command: run_save_config_tests
+++ Running case: test-cmd.run_save_config_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_save_config_tests
+++ [0513 22:30:35] Testing kubectl --save-config
+++ [0513 22:30:35] Creating namespace namespace-1652481035-6580
namespace/namespace-1652481035-6580 created
Context "test" modified.
save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod created
pod "test-pod" deleted
+++ [0513 22:30:36] Creating namespace namespace-1652481036-9652
namespace/namespace-1652481036-9652 created
Context "test" modified.
save-config.sh:41: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod created
pod/test-pod edited
pod "test-pod" deleted
+++ [0513 22:30:37] Creating namespace namespace-1652481037-29023
namespace/namespace-1652481037-29023 created
Context "test" modified.
save-config.sh:56: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod created
pod/test-pod replaced
pod "test-pod" deleted
save-config.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/nginx created
save-config.sh:74: Successful get svc {{range.items}}{{.metadata.name}}:{{end}}:
I0513 22:30:38.351469 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481037-29023/nginx" clusterIPs=map[IPv4:10.0.0.90]
service/nginx exposed
pod "nginx" deleted
service "nginx" deleted
save-config.sh:83: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
replicationcontroller/frontend created
I0513 22:30:38.803603 56663 event.go:294] "Event occurred" object="namespace-1652481037-29023/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-sj76s"
I0513 22:30:38.811303 56663 event.go:294] "Event occurred" object="namespace-1652481037-29023/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-tw5xm"
I0513 22:30:38.811336 56663 event.go:294] "Event occurred" object="namespace-1652481037-29023/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-928r7"
I0513 22:30:38.946976 53075 controller.go:611] quota admission added evaluator for: horizontalpodautoscalers.autoscaling
horizontalpodautoscaler.autoscaling/frontend autoscaled
Successful
message:autoscaling/v2
has:autoscaling/v2
Successful
message:autoscaling/v2
has:autoscaling/v2
Successful
message:autoscaling/v2
has:autoscaling/v2
horizontalpodautoscaler.autoscaling "frontend" deleted
replicationcontroller "frontend" deleted
+++ exit code: 0
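
The three autoscaling/v2 matches above come from autoscaling the frontend replication controller and reading the created HPA back; on this branch kubectl defaults to the v2 HorizontalPodAutoscaler API. A sketch of the command shape (bounds illustrative):

    kubectl autoscale rc frontend --min=1 --max=2 --cpu-percent=70
    kubectl get hpa frontend -o jsonpath='{.apiVersion}'
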
Recording: run_kubectl_create_error_tests
Running command: run_kubectl_create_error_tests
+++ Running case: test-cmd.run_kubectl_create_error_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0513 22:30:39] Creating namespace namespace-1652481039-21952
namespace/namespace-1652481039-21952 created
Context "test" modified.
+++ [0513 22:30:39] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

JSON and YAML formats are accepted.

Examples:
  # Create a pod using the data in pod.json
  kubectl create -f ./pod.json

  # Create a pod based on the JSON passed into stdin
  cat pod.json | kubectl create -f -

  # Edit the data in registry.yaml in JSON then create the resource using the edited data
  kubectl create -f registry.yaml --edit -o json

Available Commands:
  clusterrole           Create a cluster role
  clusterrolebinding    Create a cluster role binding for a particular cluster role
  configmap             Create a config map from a local file, directory or literal value
  cronjob               Create a cron job with the specified name
  deployment            Create a deployment with the specified name
  ingress               Create an ingress with the specified name
  job                   Create a job with the specified name
  namespace             Create a namespace with the specified name
  poddisruptionbudget   Create a pod disruption budget with the specified name
  priorityclass         Create a priority class with the specified name
  quota                 Create a quota with the specified name
  role                  Create a role with single rule
  rolebinding           Create a role binding for a particular role or cluster role
  secret                Create a secret using specified subcommand
  service               Create a service using a specified subcommand
  serviceaccount        Create a service account with the specified name
  token                 Request a service account token

Options:
  --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
  --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.
  --edit=false: Edit the API resource before creating
  --field-manager='kubectl-create': Name of the manager used to track field ownership.
  -f, --filename=[]: Filename, directory, or URL to files to use to create the resource
  -k, --kustomize='': Process the kustomization directory. This flag can't be used together with -f or -R.
  -o, --output='': Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).
  --raw='': Raw URI to POST to the server. Uses the transport specified by the kubeconfig file.
  -R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
  --save-config=false: If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.
  -l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.
  --show-managed-fields=false: If true, keep the managedFields when printing objects in JSON or YAML format.
  --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
"warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise. "false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields. --windows-line-endings=false: Only relevant if --edit=true. Defaults to the line ending native to your platform. Usage: kubectl create -f FILENAME [options] Use "kubectl --help" for more information about a given command. Use "kubectl options" for a list of global command-line options (applies to all commands). +++ exit code: 0 Recording: run_kubectl_apply_tests Running command: run_kubectl_apply_tests +++ Running case: test-cmd.run_kubectl_apply_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_apply_tests +++ [0513 22:30:39] Creating namespace namespace-1652481039-18762 namespace/namespace-1652481039-18762 created Context "test" modified. +++ [0513 22:30:39] Testing kubectl apply apply.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/test-pod created apply.sh:34: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label (BSuccessful (Bmessage:kubectl-client-side-apply has:kubectl-client-side-apply pod "test-pod" deleted apply.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/test-pod created apply.sh:49: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label (Bpod/test-pod configured (dry run) pod/test-pod configured (server dry run) pod/test-pod configured pod "test-pod" deleted apply.sh:65: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: (BI0513 22:30:41.625174 53075 controller.go:611] quota admission added evaluator for: deployments.apps deployment.apps/test-deployment-retainkeys created I0513 22:30:41.629945 53075 controller.go:611] quota admission added evaluator for: replicasets.apps I0513 22:30:41.635926 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/test-deployment-retainkeys" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-deployment-retainkeys-569788f666 to 1" I0513 22:30:41.644893 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/test-deployment-retainkeys-569788f666" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-569788f666-9vvvc" apply.sh:69: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys (BI0513 22:30:42.176509 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/test-deployment-retainkeys" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-deployment-retainkeys-569788f666 to 0 from 1" I0513 22:30:42.196948 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/test-deployment-retainkeys-569788f666" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-deployment-retainkeys-569788f666-9vvvc" I0513 22:30:42.236754 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/test-deployment-retainkeys" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-deployment-retainkeys-54bb65fd55 to 1" I0513 22:30:42.249259 56663 
event.go:294] "Event occurred" object="namespace-1652481039-18762/test-deployment-retainkeys-54bb65fd55" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-54bb65fd55-gvnt4" deployment.apps "test-deployment-retainkeys" deleted apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/selector-test-pod created apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod (BSuccessful (Bmessage:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found has:pods "selector-test-pod-dont-apply" not found pod "selector-test-pod" deleted apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (BW0513 22:30:43.119800 65175 helpers.go:650] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client. pod/test-pod created (dry run) pod/test-pod created (dry run) pod/test-pod created (server dry run) apply.sh:108: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/test-pod created pod/test-pod configured (dry run) pod/test-pod configured (server dry run) apply.sh:116: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label (BSuccessful (Bmessage:632 has:632 pod "test-pod" deleted customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created Successful (Bmessage:resources.mygroup.example.com has:resources.mygroup.example.com I0513 22:30:47.820025 53075 controller.go:611] quota admission added evaluator for: resources.mygroup.example.com kind.mygroup.example.com/myobj created (server dry run) customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted namespace/nsb created apply.sh:182: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/a created apply.sh:185: Successful get pods a -n nsb {{.metadata.name}}: a (Bpod/b created pod/a pruned apply.sh:189: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b: (Bpod "b" deleted apply.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/a created apply.sh:201: Successful get pods a {{.metadata.name}}: a (Bapply.sh:203: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/b created apply.sh:208: Successful get pods a {{.metadata.name}}: a (Bapply.sh:209: Successful get pods b -n nsb {{.metadata.name}}: b (Bpod "a" deleted pod "b" deleted Successful (Bmessage:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. 
Successful
message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
pod/a created
pod/b created
I0513 22:30:51.857939 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481039-18762/prune-svc" clusterIPs=map[IPv4:10.0.0.210]
service/prune-svc created
I0513 22:30:53.949974 56663 horizontal.go:360] Horizontal Pod Autoscaler frontend has been deleted in namespace-1652481037-29023
apply.sh:221: Successful get pods a {{.metadata.name}}: a
apply.sh:222: Successful get pods b -n nsb {{.metadata.name}}: b
pod "a" deleted
pod "b" deleted
namespace "nsb" deleted
persistentvolumeclaim/a-pvc created
I0513 22:31:01.496928 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/a-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
I0513 22:31:01.505096 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/a-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
service/prune-svc pruned
apply.sh:229: Successful get pvc a-pvc {{.metadata.name}}: a-pvc
persistentvolumeclaim/b-pvc created
I0513 22:31:03.137712 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/b-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
I0513 22:31:03.145660 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/b-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
persistentvolumeclaim/a-pvc pruned
I0513 22:31:03.162847 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/a-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
apply.sh:231: Successful get pvc b-pvc {{.metadata.name}}: b-pvc
apply.sh:232: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
persistentvolumeclaim "b-pvc" deleted
I0513 22:31:04.762080 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/b-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set"
apply.sh:237: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/a created
apply.sh:241: Successful get pods a {{.metadata.name}}: a
I0513 22:31:06.327039 56663 namespace_controller.go:185] Namespace has been deleted nsb
I0513 22:31:06.476722 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481039-18762/prune-svc" clusterIPs=map[IPv4:10.0.0.7]
service/prune-svc created
apply.sh:244: Successful get service prune-svc {{.metadata.name}}: prune-svc
apply.sh:245: Successful get pods a {{.metadata.name}}: a
service/prune-svc unchanged
pod/a pruned
apply.sh:248: Successful get service prune-svc {{.metadata.name}}: prune-svc
apply.sh:249: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
service "prune-svc" deleted
namespace/nsb created
apply.sh:256: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/a created
apply.sh:259: Successful get pods a -n nsb {{.metadata.name}}: a
pod/b created
apply.sh:262: Successful get pods b -n nsb {{.metadata.name}}: b
pod/b unchanged
pod/a pruned
apply.sh:266: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
namespace "nsb" deleted
Successful
message:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
apply.sh:277: Successful get services {{range.items}}{{.metadata.name}}:{{end}}:
service/a created
apply.sh:281: Successful get services a {{.metadata.name}}: a
Successful
message:The Service "a" is invalid: spec.clusterIPs[0]: Invalid value: []string{"10.0.0.12"}: may not change once set
has:may not change once set
I0513 22:31:16.823773 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481039-18762/a" clusterIPs=map[IPv4:10.0.0.12]
service/a configured
apply.sh:288: Successful get services a {{.spec.clusterIP}}: 10.0.0.12
service "a" deleted
configmap/test-the-map created
I0513 22:31:17.186481 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481039-18762/test-the-service" clusterIPs=map[IPv4:10.0.0.8]
service/test-the-service created
deployment.apps/test-the-deployment created
I0513 22:31:17.225744 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/test-the-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-the-deployment-6f7568b6b8 to 3"
I0513 22:31:17.242740 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/test-the-deployment-6f7568b6b8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6f7568b6b8-f5lzd"
I0513 22:31:17.248943 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/test-the-deployment-6f7568b6b8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6f7568b6b8-95m8m"
I0513 22:31:17.250877 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/test-the-deployment-6f7568b6b8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6f7568b6b8-2kcvc"
apply.sh:294: Successful get configmap test-the-map {{.metadata.name}}: test-the-map
apply.sh:295: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
apply.sh:296: Successful get service test-the-service {{.metadata.name}}: test-the-service
configmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
configmap/test-the-map created
I0513 22:31:17.847455 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481039-18762/test-the-service" clusterIPs=map[IPv4:10.0.0.214]
service/test-the-service created
deployment.apps/test-the-deployment created
I0513 22:31:17.874606 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/test-the-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-the-deployment-6f7568b6b8 to 3"
object="namespace-1652481039-18762/test-the-deployment-6f7568b6b8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6f7568b6b8-s7bz4" I0513 22:31:17.889082 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/test-the-deployment-6f7568b6b8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6f7568b6b8-vsqlf" I0513 22:31:17.890205 56663 event.go:294] "Event occurred" object="namespace-1652481039-18762/test-the-deployment-6f7568b6b8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6f7568b6b8-ntf5c" apply.sh:302: Successful get configmap test-the-map {{.metadata.name}}: test-the-map (Bapply.sh:303: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment (Bapply.sh:304: Successful get service test-the-service {{.metadata.name}}: test-the-service (Bconfigmap "test-the-map" deleted service "test-the-service" deleted deployment.apps "test-the-deployment" deleted Successful (Bmessage:Error from server (NotFound): namespaces "multi-resource-ns" not found has:namespaces "multi-resource-ns" not found apply.sh:312: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (BSuccessful (Bmessage:namespace/multi-resource-ns created Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found has:namespaces "multi-resource-ns" not found Successful (Bmessage:Error from server (NotFound): pods "test-pod" not found has:pods "test-pod" not found pod/test-pod created namespace/multi-resource-ns unchanged apply.sh:320: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod (Bpod "test-pod" deleted namespace "multi-resource-ns" deleted I0513 22:31:20.997604 56663 namespace_controller.go:185] Namespace has been deleted nsb apply.sh:326: Successful get configmaps --field-selector=metadata.name=foo {{range.items}}{{.metadata.name}}:{{end}}: (BSuccessful (Bmessage:configmap/foo created error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1" ensure CRDs are installed first has:no matches for kind "Bogus" in version "example.com/v1" apply.sh:332: Successful get configmaps foo {{.metadata.name}}: foo (Bconfigmap "foo" deleted apply.sh:338: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (BSuccessful (Bmessage:pod/pod-a created pod/pod-c created The Pod "POD-B" is invalid: metadata.name: Invalid value: "POD-B": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 
Successful
message:pod/pod-a created
pod/pod-c created
The Pod "POD-B" is invalid: metadata.name: Invalid value: "POD-B": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
has:The Pod "POD-B" is invalid
apply.sh:342: Successful get pods pod-a {{.metadata.name}}: pod-a
apply.sh:343: Successful get pods pod-c {{.metadata.name}}: pod-c
pod "pod-a" deleted
pod "pod-c" deleted
apply.sh:346: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
apply.sh:350: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}:
Successful
message:customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
ensure CRDs are installed first
has:no matches for kind "Widget" in version "example.com/v1"
Successful
message:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:356: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
I0513 22:31:28.952296 53075 controller.go:611] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
apply.sh:359: Successful get widget foo {{.metadata.name}}: foo
widget.example.com "foo" deleted
customresourcedefinition.apiextensions.k8s.io "widgets.example.com" deleted
+++ exit code: 0
Recording: run_kubectl_server_side_apply_tests
Running command: run_kubectl_server_side_apply_tests
+++ Running case: test-cmd.run_kubectl_server_side_apply_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_server_side_apply_tests
+++ [0513 22:31:29] Creating namespace namespace-1652481089-3702
namespace/namespace-1652481089-3702 created
I0513 22:31:29.278424 56663 namespace_controller.go:185] Namespace has been deleted multi-resource-ns
Context "test" modified.
+++ [0513 22:31:29] Testing kubectl apply --server-side
apply.sh:376: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod serverside-applied
apply.sh:380: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
Successful
message:kubectl
has:kubectl
pod/test-pod serverside-applied
Successful
message:kubectl my-field-manager
has:my-field-manager
pod "test-pod" deleted
apply.sh:393: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod serverside-applied (server dry run)
apply.sh:398: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/test-pod serverside-applied
pod/test-pod serverside-applied (server dry run)
apply.sh:405: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
Successful
message:867
has:867
pod "test-pod" deleted
apply.sh:415: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
+++ [0513 22:31:31] Testing upgrade kubectl client-side apply to server-side apply
pod/test-pod created
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using v1: .metadata.labels.name
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
  current managers.
* You may co-own fields by updating your manifest to match the existing
  value; in this case, you'll become the manager if the other manager(s)
  stop managing the field (remove it from their configuration).
See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
pod/test-pod serverside-applied
Successful
message:{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "labels": {
      "name": "test-pod-applied"
    },
    "name": "test-pod",
    "namespace": "namespace-1652481089-3702"
  },
  "spec": {
    "containers": [
      {
        "image": "k8s.gcr.io/pause:3.7",
        "name": "kubernetes-pause"
      }
    ]
  }
}
has:"name": "test-pod-applied"
+++ [0513 22:31:32] Testing downgrade kubectl server-side apply to client-side apply
pod/test-pod serverside-applied
Successful
message:{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "labels": {
      "name": "test-pod-label"
    },
    "name": "test-pod",
    "namespace": "namespace-1652481089-3702"
  },
  "spec": {
    "containers": [
      {
        "image": "k8s.gcr.io/pause:3.7",
        "name": "kubernetes-pause"
      }
    ]
  }
}
has:"name": "test-pod-label"
pod/test-pod configured
pod "test-pod" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
Successful
message:resources.mygroup.example.com
has:resources.mygroup.example.com
kind.mygroup.example.com/myobj serverside-applied (server dry run)
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
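
The server-side apply cases above revolve around field managers: --field-manager names the owner of the applied fields (kubectl and my-field-manager both appear in the assertions), conflicting managers fail the apply by default, and --force-conflicts takes ownership, which is the resolution path the conflict message suggests. A sketch (file name illustrative):

    kubectl apply --server-side -f test-pod.yaml
    kubectl apply --server-side --field-manager=my-field-manager -f test-pod.yaml
    kubectl apply --server-side --force-conflicts -f test-pod.yaml
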
+++ [0513 22:31:35] Testing kubectl create filter create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/selector-test-pod created create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod (BSuccessful (Bmessage:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found has:pods "selector-test-pod-dont-apply" not found pod "selector-test-pod" deleted +++ exit code: 0 Recording: run_kubectl_apply_deployments_tests Running command: run_kubectl_apply_deployments_tests +++ Running case: test-cmd.run_kubectl_apply_deployments_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_apply_deployments_tests +++ [0513 22:31:36] Creating namespace namespace-1652481096-1146 namespace/namespace-1652481096-1146 created Context "test" modified. +++ [0513 22:31:36] Testing kubectl apply deployments apps.sh:121: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:122: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:123: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bdeployment.apps/my-depl created I0513 22:31:37.054959 56663 event.go:294] "Event occurred" object="namespace-1652481096-1146/my-depl" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set my-depl-559b67d5b8 to 1" I0513 22:31:37.072105 56663 event.go:294] "Event occurred" object="namespace-1652481096-1146/my-depl-559b67d5b8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: my-depl-559b67d5b8-4d55s" apps.sh:127: Successful get deployments my-depl {{.metadata.name}}: my-depl (Bapps.sh:129: Successful get deployments my-depl {{.spec.template.metadata.labels.l1}}: l1 (Bapps.sh:130: Successful get deployments my-depl {{.spec.selector.matchLabels.l1}}: l1 (Bapps.sh:131: Successful get deployments my-depl {{.metadata.labels.l1}}: l1 (Bdeployment.apps/my-depl configured apps.sh:136: Successful get deployments my-depl {{.spec.template.metadata.labels.l1}}: l1 (Bapps.sh:137: Successful get deployments my-depl {{.spec.selector.matchLabels.l1}}: l1 (Bapps.sh:138: Successful get deployments my-depl {{.metadata.labels.l1}}: (Bdeployment.apps "my-depl" deleted replicaset.apps "my-depl-559b67d5b8" deleted pod "my-depl-559b67d5b8-4d55s" deleted apps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:145: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:146: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:150: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: (Bdeployment.apps/nginx created I0513 22:31:38.326714 56663 event.go:294] "Event occurred" object="namespace-1652481096-1146/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-6cf67855f7 to 3" I0513 22:31:38.335468 56663 event.go:294] "Event occurred" object="namespace-1652481096-1146/nginx-6cf67855f7" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6cf67855f7-q7phz" I0513 22:31:38.343032 56663 event.go:294] "Event occurred" object="namespace-1652481096-1146/nginx-6cf67855f7" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6cf67855f7-7nksx" I0513 
22:31:38.343058 56663 event.go:294] "Event occurred" object="namespace-1652481096-1146/nginx-6cf67855f7" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6cf67855f7-h7r76" apps.sh:154: Successful get deployment nginx {{.metadata.name}}: nginx (BSuccessful (Bmessage:Error from server (Conflict): error when applying patch: {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1652481096-1146\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}} to: Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment" Name: "nginx", Namespace: "namespace-1652481096-1146" for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again has:Error from server (Conflict) deployment.apps/nginx configured I0513 22:31:46.872908 56663 event.go:294] "Event occurred" object="namespace-1652481096-1146/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-8458596ddd to 3" I0513 22:31:46.880240 56663 event.go:294] "Event occurred" object="namespace-1652481096-1146/nginx-8458596ddd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-8458596ddd-qshdb" I0513 22:31:46.908983 56663 event.go:294] "Event occurred" object="namespace-1652481096-1146/nginx-8458596ddd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-8458596ddd-g54rn" I0513 22:31:46.909317 56663 event.go:294] "Event occurred" object="namespace-1652481096-1146/nginx-8458596ddd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-8458596ddd-qm9t7" Successful (Bmessage: "name": "nginx2" "name": "nginx2" has:"name": "nginx2" Successful (Bmessage:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels` has:Invalid value I0513 22:31:51.194715 56663 event.go:294] "Event occurred" object="namespace-1652481096-1146/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-8458596ddd to 3" I0513 22:31:51.202310 56663 event.go:294] "Event occurred" object="namespace-1652481096-1146/nginx-8458596ddd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-8458596ddd-69hd7" I0513 22:31:51.208371 56663 event.go:294] "Event occurred" object="namespace-1652481096-1146/nginx-8458596ddd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-8458596ddd-j7sg2" I0513 22:31:51.208400 56663 
event.go:294] "Event occurred" object="namespace-1652481096-1146/nginx-8458596ddd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-8458596ddd-mfqc6" apps.sh:174: Successful get deployment nginx {{.spec.template.metadata.labels.name}}: nginx2 (Bdeployment.apps "nginx" deleted +++ exit code: 0 Recording: run_kubectl_diff_tests Running command: run_kubectl_diff_tests +++ Running case: test-cmd.run_kubectl_diff_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_diff_tests +++ [0513 22:31:51] Creating namespace namespace-1652481111-9703 namespace/namespace-1652481111-9703 created Context "test" modified. +++ [0513 22:31:51] Testing kubectl diff Successful (Bmessage:diff -u -N /tmp/LIVE-3116795142/v1.Pod.namespace-1652481111-9703.test-pod /tmp/MERGED-1812137486/v1.Pod.namespace-1652481111-9703.test-pod --- /tmp/LIVE-3116795142/v1.Pod.namespace-1652481111-9703.test-pod 2022-05-13 22:31:51.626682328 +0000 +++ /tmp/MERGED-1812137486/v1.Pod.namespace-1652481111-9703.test-pod 2022-05-13 22:31:51.630682670 +0000 @@ -0,0 +1,55 @@ +apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: "2022-05-13T22:31:51Z" + labels: + name: test-pod-label + managedFields: + - apiVersion: v1 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: + f:labels: + .: {} + f:name: {} + f:spec: + f:containers: + k:{"name":"kubernetes-pause"}: + .: {} + f:image: {} + f:imagePullPolicy: {} + f:name: {} + f:resources: {} + f:terminationMessagePath: {} + f:terminationMessagePolicy: {} + f:dnsPolicy: {} + f:enableServiceLinks: {} + f:restartPolicy: {} + f:schedulerName: {} + f:securityContext: {} + f:terminationGracePeriodSeconds: {} + manager: kubectl-client-side-apply + operation: Update + time: "2022-05-13T22:31:51Z" + name: test-pod + namespace: namespace-1652481111-9703 + uid: 2c9840d5-71a2-4f65-a64b-f8938e4933eb +spec: + containers: + - image: k8s.gcr.io/pause:3.7 + imagePullPolicy: IfNotPresent + name: kubernetes-pause + resources: {} + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + dnsPolicy: ClusterFirst + enableServiceLinks: true + preemptionPolicy: PreemptLowerPriority + priority: 0 + restartPolicy: Always + schedulerName: default-scheduler + securityContext: {} + terminationGracePeriodSeconds: 30 +status: + phase: Pending + qosClass: BestEffort has:test-pod diff.sh:33: Successful get pod {{range.items}}{{ if eq .metadata.name \"test-pod\" }}found{{end}}{{end}}:: : (Bpod/test-pod created diff.sh:36: Successful get pod {{range.items}}{{ if eq .metadata.name \"test-pod\" }}found{{end}}{{end}}:: found: (BSuccessful (Bmessage:1007 has:1007 Successful (Bmessage:diff -u -N /tmp/LIVE-2010821227/v1.Pod.namespace-1652481111-9703.test-pod /tmp/MERGED-2110708058/v1.Pod.namespace-1652481111-9703.test-pod --- /tmp/LIVE-2010821227/v1.Pod.namespace-1652481111-9703.test-pod 2022-05-13 22:31:52.346743796 +0000 +++ /tmp/MERGED-2110708058/v1.Pod.namespace-1652481111-9703.test-pod 2022-05-13 22:31:52.346743796 +0000 @@ -43,7 +43,7 @@ uid: 139b2285-3ba3-49f2-9599-4ca4d376144d spec: containers: - - image: k8s.gcr.io/pause:3.7 + - image: k8s.gcr.io/pause:3.4 imagePullPolicy: IfNotPresent name: kubernetes-pause resources: {} has:k8s.gcr.io/pause:3.4 Successful (Bmessage:1007 has:1007 Successful (Bmessage:diff -u -N /tmp/LIVE-2679373360/v1.Pod.namespace-1652481111-9703.test-pod /tmp/MERGED-262950272/v1.Pod.namespace-1652481111-9703.test-pod --- /tmp/LIVE-2679373360/v1.Pod.namespace-1652481111-9703.test-pod 
2022-05-13 22:31:52.566762577 +0000 +++ /tmp/MERGED-262950272/v1.Pod.namespace-1652481111-9703.test-pod 2022-05-13 22:31:52.570762919 +0000 @@ -3,7 +3,7 @@ metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"test-pod-label"},"name":"test-pod","namespace":"namespace-1652481111-9703"},"spec":{"containers":[{"image":"k8s.gcr.io/pause:3.7","name":"kubernetes-pause"}]}} + {"apiVersion":"v1","kind":"Pod","metadata":{"labels":{"name":"test-pod-label"},"name":"test-pod","namespace":"namespace-1652481111-9703"},"spec":{"containers":[{"image":"k8s.gcr.io/pause:3.4","name":"kubernetes-pause"}]}} creationTimestamp: "2022-05-13T22:31:51Z" labels: name: test-pod-label @@ -12,6 +12,21 @@ fieldsType: FieldsV1 fieldsV1: f:metadata: + f:labels: + f:name: {} + f:spec: + f:containers: + k:{"name":"kubernetes-pause"}: + .: {} + f:image: {} + f:name: {} + manager: kubectl + operation: Apply + time: "2022-05-13T22:31:52Z" + - apiVersion: v1 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/last-applied-configuration: {} @@ -22,7 +37,6 @@ f:containers: k:{"name":"kubernetes-pause"}: .: {} - f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: {} @@ -43,7 +57,7 @@ uid: 139b2285-3ba3-49f2-9599-4ca4d376144d spec: containers: - - image: k8s.gcr.io/pause:3.7 + - image: k8s.gcr.io/pause:3.4 imagePullPolicy: IfNotPresent name: kubernetes-pause resources: {} has:k8s.gcr.io/pause:3.4 Successful (Bmessage:1007 has:1007 The Pod "test" is invalid: spec.containers[0].name: Required value pod "test-pod" deleted +++ [0513 22:31:52] Testing kubectl diff with server-side apply Successful (Bmessage:diff -u -N /tmp/LIVE-2999538090/v1.Pod.namespace-1652481111-9703.test-pod /tmp/MERGED-2791861292/v1.Pod.namespace-1652481111-9703.test-pod --- /tmp/LIVE-2999538090/v1.Pod.namespace-1652481111-9703.test-pod 2022-05-13 22:31:53.002799800 +0000 +++ /tmp/MERGED-2791861292/v1.Pod.namespace-1652481111-9703.test-pod 2022-05-13 22:31:53.002799800 +0000 @@ -0,0 +1,44 @@ +apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: "2022-05-13T22:31:53Z" + labels: + name: test-pod-label + managedFields: + - apiVersion: v1 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: + f:labels: + f:name: {} + f:spec: + f:containers: + k:{"name":"kubernetes-pause"}: + .: {} + f:image: {} + f:name: {} + manager: kubectl + operation: Apply + time: "2022-05-13T22:31:53Z" + name: test-pod + namespace: namespace-1652481111-9703 + uid: bde530f4-3d90-45c7-b3ef-6128a9a7653e +spec: + containers: + - image: k8s.gcr.io/pause:3.7 + imagePullPolicy: IfNotPresent + name: kubernetes-pause + resources: {} + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + dnsPolicy: ClusterFirst + enableServiceLinks: true + preemptionPolicy: PreemptLowerPriority + priority: 0 + restartPolicy: Always + schedulerName: default-scheduler + securityContext: {} + terminationGracePeriodSeconds: 30 +status: + phase: Pending + qosClass: BestEffort has:test-pod diff.sh:76: Successful get pod {{range.items}}{{ if eq .metadata.name \"test-pod\" }}found{{end}}{{end}}:: : (Bpod/test-pod serverside-applied diff.sh:80: Successful get pod {{range.items}}{{ if eq .metadata.name \"test-pod\" }}found{{end}}{{end}}:: found: (BSuccessful (Bmessage:diff -u -N /tmp/LIVE-215881356/v1.Pod.namespace-1652481111-9703.test-pod /tmp/MERGED-2424709898/v1.Pod.namespace-1652481111-9703.test-pod --- 
/tmp/LIVE-215881356/v1.Pod.namespace-1652481111-9703.test-pod 2022-05-13 22:31:53.594850339 +0000 +++ /tmp/MERGED-2424709898/v1.Pod.namespace-1652481111-9703.test-pod 2022-05-13 22:31:53.594850339 +0000 @@ -26,7 +26,7 @@ uid: 85883157-e980-4312-8b3a-6b4e681f6819 spec: containers: - - image: k8s.gcr.io/pause:3.7 + - image: k8s.gcr.io/pause:3.4 imagePullPolicy: IfNotPresent name: kubernetes-pause resources: {} has:k8s.gcr.io/pause:3.4 namespace/nsb created pod/a created diff.sh:94: Successful get pods a -n nsb {{.metadata.name}}: a (BSuccessful (Bmessage:diff -u -N /tmp/LIVE-3170272416/v1.Pod.nsb.b /tmp/MERGED-2655677580/v1.Pod.nsb.b --- /tmp/LIVE-3170272416/v1.Pod.nsb.b 2022-05-13 22:31:54.030887561 +0000 +++ /tmp/MERGED-2655677580/v1.Pod.nsb.b 2022-05-13 22:31:54.034887903 +0000 @@ -0,0 +1,55 @@ +apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: "2022-05-13T22:31:54Z" + labels: + prune-group: "true" + managedFields: + - apiVersion: v1 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: + f:labels: + .: {} + f:prune-group: {} + f:spec: + f:containers: + k:{"name":"kubernetes-pause"}: + .: {} + f:image: {} + f:imagePullPolicy: {} + f:name: {} + f:resources: {} + f:terminationMessagePath: {} + f:terminationMessagePolicy: {} + f:dnsPolicy: {} + f:enableServiceLinks: {} + f:restartPolicy: {} + f:schedulerName: {} + f:securityContext: {} + f:terminationGracePeriodSeconds: {} + manager: kubectl-client-side-apply + operation: Update + time: "2022-05-13T22:31:54Z" + name: b + namespace: nsb + uid: 90294625-3f58-49fa-900f-752e9ca9ab99 +spec: + containers: + - image: k8s.gcr.io/pause:3.7 + imagePullPolicy: IfNotPresent + name: kubernetes-pause + resources: {} + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + dnsPolicy: ClusterFirst + enableServiceLinks: true + preemptionPolicy: PreemptLowerPriority + priority: 0 + restartPolicy: Always + schedulerName: default-scheduler + securityContext: {} + terminationGracePeriodSeconds: 30 +status: + phase: Pending + qosClass: BestEffort has not:name: a Successful (Bmessage:diff -u -N /tmp/LIVE-3992679004/v1.Pod.nsb.a /tmp/MERGED-445419692/v1.Pod.nsb.a --- /tmp/LIVE-3992679004/v1.Pod.nsb.a 2022-05-13 22:31:55.387003324 +0000 +++ /tmp/MERGED-445419692/v1.Pod.nsb.a 1970-01-01 00:00:00.000000000 +0000 @@ -1,62 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - annotations: - kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"prune-group":"true"},"name":"a","namespace":"nsb"},"spec":{"containers":[{"image":"k8s.gcr.io/pause:3.7","name":"kubernetes-pause"}]}} - creationTimestamp: "2022-05-13T22:31:53Z" - labels: - prune-group: "true" - managedFields: - - apiVersion: v1 - fieldsType: FieldsV1 - fieldsV1: - f:metadata: - f:annotations: - .: {} - f:kubectl.kubernetes.io/last-applied-configuration: {} - f:labels: - .: {} - f:prune-group: {} - f:spec: - f:containers: - k:{"name":"kubernetes-pause"}: - .: {} - f:image: {} - f:imagePullPolicy: {} - f:name: {} - f:resources: {} - f:terminationMessagePath: {} - f:terminationMessagePolicy: {} - f:dnsPolicy: {} - f:enableServiceLinks: {} - f:restartPolicy: {} - f:schedulerName: {} - f:securityContext: {} - f:terminationGracePeriodSeconds: {} - manager: kubectl-client-side-apply - operation: Update - time: "2022-05-13T22:31:53Z" - name: a - namespace: nsb - resourceVersion: "1015" - uid: 1064a8ba-ed86-4993-a896-8da1e714efd6 -spec: - containers: - - image: k8s.gcr.io/pause:3.7 - imagePullPolicy: IfNotPresent - name: 
kubernetes-pause - resources: {} - terminationMessagePath: /dev/termination-log - terminationMessagePolicy: File - dnsPolicy: ClusterFirst - enableServiceLinks: true - preemptionPolicy: PreemptLowerPriority - priority: 0 - restartPolicy: Always - schedulerName: default-scheduler - securityContext: {} - terminationGracePeriodSeconds: 30 -status: - phase: Pending - qosClass: BestEffort diff -u -N /tmp/LIVE-3992679004/v1.Pod.nsb.b /tmp/MERGED-445419692/v1.Pod.nsb.b --- /tmp/LIVE-3992679004/v1.Pod.nsb.b 2022-05-13 22:31:54.186900879 +0000 +++ /tmp/MERGED-445419692/v1.Pod.nsb.b 2022-05-13 22:31:54.186900879 +0000 @@ -0,0 +1,55 @@ +apiVersion: v1 +kind: Pod +metadata: + creationTimestamp: "2022-05-13T22:31:54Z" + labels: + prune-group: "true" + managedFields: + - apiVersion: v1 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: + f:labels: + .: {} + f:prune-group: {} + f:spec: + f:containers: + k:{"name":"kubernetes-pause"}: + .: {} + f:image: {} + f:imagePullPolicy: {} + f:name: {} + f:resources: {} + f:terminationMessagePath: {} + f:terminationMessagePolicy: {} + f:dnsPolicy: {} + f:enableServiceLinks: {} + f:restartPolicy: {} + f:schedulerName: {} + f:securityContext: {} + f:terminationGracePeriodSeconds: {} + manager: kubectl-client-side-apply + operation: Update + time: "2022-05-13T22:31:54Z" + name: b + namespace: nsb + uid: 3a8588f0-4175-4dc1-95f1-d3271f18b2bf +spec: + containers: + - image: k8s.gcr.io/pause:3.7 + imagePullPolicy: IfNotPresent + name: kubernetes-pause + resources: {} + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + dnsPolicy: ClusterFirst + enableServiceLinks: true + preemptionPolicy: PreemptLowerPriority + priority: 0 + restartPolicy: Always + schedulerName: default-scheduler + securityContext: {} + terminationGracePeriodSeconds: 30 +status: + phase: Pending + qosClass: BestEffort has:name: a pod/b created pod/a pruned diff.sh:107: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b: (Bpod "test-pod" deleted pod "b" deleted +++ exit code: 0 Recording: run_kubectl_diff_same_names Running command: run_kubectl_diff_same_names +++ Running case: test-cmd.run_kubectl_diff_same_names +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_diff_same_names +++ [0513 22:31:58] Creating namespace namespace-1652481118-9708 namespace/namespace-1652481118-9708 created Context "test" modified. 
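[Editor's note: the prune sequence in the preceding diff case first previews the deletion with kubectl diff, then applies it. A rough sketch of the same shape; a.yaml and b.yaml are hypothetical pod manifests, each carrying the label prune-group=true:

  # Create pod/a as part of the labeled set.
  kubectl apply -n nsb -l prune-group=true -f a.yaml
  # Preview: the diff renders pod/b as an addition and pod/a as a removal,
  # its MERGED file diffing against an empty (epoch-timestamped) file, as above.
  kubectl diff -n nsb -l prune-group=true --prune -f b.yaml
  # Apply: pod/b is created and pod/a, now absent from the manifest set, is pruned.
  kubectl apply -n nsb -l prune-group=true --prune -f b.yaml
]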
+++ [0513 22:31:58] Test kubectl diff with multiple resources with the same name
Successful
message:/tmp/LIVE-848399449
/tmp/LIVE-848399449/v1.ConfigMap.namespace-1652481118-9708.test
/tmp/LIVE-848399449/v1.Secret.namespace-1652481118-9708.test
/tmp/LIVE-848399449/apps.v1.Deployment.namespace-1652481118-9708.test
/tmp/LIVE-848399449/v1.Pod.namespace-1652481118-9708.test
/tmp/MERGED-3949399454
/tmp/MERGED-3949399454/v1.ConfigMap.namespace-1652481118-9708.test
/tmp/MERGED-3949399454/v1.Secret.namespace-1652481118-9708.test
/tmp/MERGED-3949399454/apps.v1.Deployment.namespace-1652481118-9708.test
/tmp/MERGED-3949399454/v1.Pod.namespace-1652481118-9708.test
has:v1\.Pod\..*\.test
Successful
message:/tmp/LIVE-848399449
/tmp/LIVE-848399449/v1.ConfigMap.namespace-1652481118-9708.test
/tmp/LIVE-848399449/v1.Secret.namespace-1652481118-9708.test
/tmp/LIVE-848399449/apps.v1.Deployment.namespace-1652481118-9708.test
/tmp/LIVE-848399449/v1.Pod.namespace-1652481118-9708.test
/tmp/MERGED-3949399454
/tmp/MERGED-3949399454/v1.ConfigMap.namespace-1652481118-9708.test
/tmp/MERGED-3949399454/v1.Secret.namespace-1652481118-9708.test
/tmp/MERGED-3949399454/apps.v1.Deployment.namespace-1652481118-9708.test
/tmp/MERGED-3949399454/v1.Pod.namespace-1652481118-9708.test
has:apps\.v1\.Deployment\..*\.test
Successful
message:/tmp/LIVE-848399449
/tmp/LIVE-848399449/v1.ConfigMap.namespace-1652481118-9708.test
/tmp/LIVE-848399449/v1.Secret.namespace-1652481118-9708.test
/tmp/LIVE-848399449/apps.v1.Deployment.namespace-1652481118-9708.test
/tmp/LIVE-848399449/v1.Pod.namespace-1652481118-9708.test
/tmp/MERGED-3949399454
/tmp/MERGED-3949399454/v1.ConfigMap.namespace-1652481118-9708.test
/tmp/MERGED-3949399454/v1.Secret.namespace-1652481118-9708.test
/tmp/MERGED-3949399454/apps.v1.Deployment.namespace-1652481118-9708.test
/tmp/MERGED-3949399454/v1.Pod.namespace-1652481118-9708.test
has:v1\.ConfigMap\..*\.test
Successful
message:/tmp/LIVE-848399449
/tmp/LIVE-848399449/v1.ConfigMap.namespace-1652481118-9708.test
/tmp/LIVE-848399449/v1.Secret.namespace-1652481118-9708.test
/tmp/LIVE-848399449/apps.v1.Deployment.namespace-1652481118-9708.test
/tmp/LIVE-848399449/v1.Pod.namespace-1652481118-9708.test
/tmp/MERGED-3949399454
/tmp/MERGED-3949399454/v1.ConfigMap.namespace-1652481118-9708.test
/tmp/MERGED-3949399454/v1.Secret.namespace-1652481118-9708.test
/tmp/MERGED-3949399454/apps.v1.Deployment.namespace-1652481118-9708.test
/tmp/MERGED-3949399454/v1.Pod.namespace-1652481118-9708.test
has:v1\.Secret\..*\.test
+++ exit code: 0
Recording: run_kubectl_get_tests
Running command: run_kubectl_get_tests
+++ Running case: test-cmd.run_kubectl_get_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_get_tests
+++ [0513 22:31:58] Creating namespace namespace-1652481118-5908
namespace/namespace-1652481118-5908 created
Context "test" modified.
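[Editor's note: the LIVE/MERGED listings in the same-name diff case above come from the temporary trees kubectl diff materializes before invoking a diff program. Each object is written to a file named after its group/version, kind, namespace, and name (v1.Pod.<ns>.test, apps.v1.Deployment.<ns>.test, ...), so a ConfigMap, Secret, Deployment, and Pod that all share the name "test" land in distinct files. One speculative way to produce such a listing yourself, using kubectl's documented KUBECTL_EXTERNAL_DIFF hook; multi.yaml is a placeholder manifest declaring the four same-named objects:

  # kubectl invokes the external differ as: $KUBECTL_EXTERNAL_DIFF <live-dir> <merged-dir>,
  # so substituting "find" prints both trees instead of diffing them.
  KUBECTL_EXTERNAL_DIFF=find kubectl diff -f multi.yaml
]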
+++ [0513 22:31:58] Testing kubectl get get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (BSuccessful (Bmessage:Error from server (NotFound): pods "abc" not found has:pods "abc" not found get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (BSuccessful (Bmessage:Error from server (NotFound): pods "abc" not found has:pods "abc" not found get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (BSuccessful (Bmessage:{ "apiVersion": "v1", "items": [], "kind": "List", "metadata": { "resourceVersion": "" } } has not:No resources found Successful (Bmessage:apiVersion: v1 items: [] kind: List metadata: resourceVersion: "" has not:No resources found Successful (Bmessage: has not:No resources found Successful (Bmessage:[] has not:No resources found Successful (Bmessage:[] has not:No resources found Successful (Bmessage:NAME has not:No resources found get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (BSuccessful (Bmessage:error: the server doesn't have a resource type "foobar" has not:No resources found Successful (Bmessage:No resources found in namespace-1652481118-5908 namespace. has:No resources found Successful (Bmessage: has not:No resources found Successful (Bmessage:No resources found in namespace-1652481118-5908 namespace. has:No resources found get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (BSuccessful (Bmessage:Error from server (NotFound): pods "abc" not found has:pods "abc" not found Successful (Bmessage:Error from server (NotFound): pods "abc" not found has not:List Successful (Bmessage:I0513 22:32:00.687087 68695 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:00.691837 68695 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:32:00.725433 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I0513 22:32:00.727178 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds I0513 22:32:00.728935 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds I0513 22:32:00.730342 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds I0513 22:32:00.731726 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds I0513 22:32:00.732972 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0513 22:32:00.734233 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds I0513 22:32:00.735491 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds I0513 22:32:00.736829 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 1 milliseconds I0513 22:32:00.738152 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.0.0.1 443/TCP 2m56s has:/api/v1/namespaces/default/pods 200 OK Successful (Bmessage:I0513 22:32:00.687087 68695 loader.go:372] Config loaded from file: 
/tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:00.691837 68695 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:32:00.725433 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I0513 22:32:00.727178 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds I0513 22:32:00.728935 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds I0513 22:32:00.730342 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds I0513 22:32:00.731726 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds I0513 22:32:00.732972 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0513 22:32:00.734233 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds I0513 22:32:00.735491 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds I0513 22:32:00.736829 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 1 milliseconds I0513 22:32:00.738152 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.0.0.1 443/TCP 2m56s has:/api/v1/namespaces/default/replicationcontrollers 200 OK Successful (Bmessage:I0513 22:32:00.687087 68695 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:00.691837 68695 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:32:00.725433 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I0513 22:32:00.727178 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds I0513 22:32:00.728935 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds I0513 22:32:00.730342 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds I0513 22:32:00.731726 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds I0513 22:32:00.732972 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0513 22:32:00.734233 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds I0513 22:32:00.735491 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds I0513 22:32:00.736829 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 1 milliseconds I0513 22:32:00.738152 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes 
ClusterIP 10.0.0.1 443/TCP 2m56s has:/api/v1/namespaces/default/services 200 OK Successful (Bmessage:I0513 22:32:00.687087 68695 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:00.691837 68695 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:32:00.725433 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I0513 22:32:00.727178 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds I0513 22:32:00.728935 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds I0513 22:32:00.730342 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds I0513 22:32:00.731726 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds I0513 22:32:00.732972 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0513 22:32:00.734233 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds I0513 22:32:00.735491 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds I0513 22:32:00.736829 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 1 milliseconds I0513 22:32:00.738152 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.0.0.1 443/TCP 2m56s has:/apis/apps/v1/namespaces/default/daemonsets 200 OK Successful (Bmessage:I0513 22:32:00.687087 68695 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:00.691837 68695 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:32:00.725433 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I0513 22:32:00.727178 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds I0513 22:32:00.728935 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds I0513 22:32:00.730342 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds I0513 22:32:00.731726 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds I0513 22:32:00.732972 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0513 22:32:00.734233 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds I0513 22:32:00.735491 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds I0513 22:32:00.736829 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 1 milliseconds I0513 22:32:00.738152 68695 
round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.0.0.1 443/TCP 2m56s has:/apis/apps/v1/namespaces/default/deployments 200 OK Successful (Bmessage:I0513 22:32:00.687087 68695 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:00.691837 68695 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:32:00.725433 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I0513 22:32:00.727178 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds I0513 22:32:00.728935 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds I0513 22:32:00.730342 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds I0513 22:32:00.731726 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds I0513 22:32:00.732972 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0513 22:32:00.734233 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds I0513 22:32:00.735491 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds I0513 22:32:00.736829 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 1 milliseconds I0513 22:32:00.738152 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.0.0.1 443/TCP 2m56s has:/apis/apps/v1/namespaces/default/replicasets 200 OK Successful (Bmessage:I0513 22:32:00.687087 68695 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:00.691837 68695 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:32:00.725433 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I0513 22:32:00.727178 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds I0513 22:32:00.728935 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds I0513 22:32:00.730342 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds I0513 22:32:00.731726 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds I0513 22:32:00.732972 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0513 22:32:00.734233 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds I0513 22:32:00.735491 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds 
I0513 22:32:00.736829 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 1 milliseconds I0513 22:32:00.738152 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.0.0.1 443/TCP 2m56s has:/apis/apps/v1/namespaces/default/statefulsets 200 OK Successful (Bmessage:I0513 22:32:00.687087 68695 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:00.691837 68695 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:32:00.725433 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I0513 22:32:00.727178 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds I0513 22:32:00.728935 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds I0513 22:32:00.730342 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds I0513 22:32:00.731726 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds I0513 22:32:00.732972 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0513 22:32:00.734233 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds I0513 22:32:00.735491 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds I0513 22:32:00.736829 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 1 milliseconds I0513 22:32:00.738152 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.0.0.1 443/TCP 2m56s has:/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 Successful (Bmessage:I0513 22:32:00.687087 68695 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:00.691837 68695 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:32:00.725433 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I0513 22:32:00.727178 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds I0513 22:32:00.728935 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds I0513 22:32:00.730342 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds I0513 22:32:00.731726 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds I0513 22:32:00.732972 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0513 22:32:00.734233 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 
200 OK in 1 milliseconds I0513 22:32:00.735491 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds I0513 22:32:00.736829 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 1 milliseconds I0513 22:32:00.738152 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.0.0.1 443/TCP 2m56s has:/apis/batch/v1/namespaces/default/jobs 200 OK Successful (Bmessage:I0513 22:32:00.687087 68695 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:00.691837 68695 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:32:00.725433 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I0513 22:32:00.727178 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds I0513 22:32:00.728935 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds I0513 22:32:00.730342 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds I0513 22:32:00.731726 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds I0513 22:32:00.732972 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0513 22:32:00.734233 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds I0513 22:32:00.735491 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds I0513 22:32:00.736829 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 1 milliseconds I0513 22:32:00.738152 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.0.0.1 443/TCP 2m56s has not:/apis/extensions/v1beta1/namespaces/default/daemonsets 200 OK Successful (Bmessage:I0513 22:32:00.687087 68695 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:00.691837 68695 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:32:00.725433 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I0513 22:32:00.727178 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds I0513 22:32:00.728935 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds I0513 22:32:00.730342 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds I0513 22:32:00.731726 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds I0513 22:32:00.732972 68695 round_trippers.go:553] GET 
https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0513 22:32:00.734233 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds I0513 22:32:00.735491 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds I0513 22:32:00.736829 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 1 milliseconds I0513 22:32:00.738152 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.0.0.1 443/TCP 2m56s has not:/apis/extensions/v1beta1/namespaces/default/deployments 200 OK Successful (Bmessage:I0513 22:32:00.687087 68695 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:00.691837 68695 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:32:00.725433 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I0513 22:32:00.727178 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds I0513 22:32:00.728935 68695 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services 200 OK in 1 milliseconds I0513 22:32:00.730342 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/daemonsets 200 OK in 1 milliseconds I0513 22:32:00.731726 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/deployments 200 OK in 1 milliseconds I0513 22:32:00.732972 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/replicasets 200 OK in 1 milliseconds I0513 22:32:00.734233 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/default/statefulsets 200 OK in 1 milliseconds I0513 22:32:00.735491 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/default/horizontalpodautoscalers 200 OK in 1 milliseconds I0513 22:32:00.736829 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/cronjobs 200 OK in 1 milliseconds I0513 22:32:00.738152 68695 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/default/jobs 200 OK in 1 milliseconds NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.0.0.1 443/TCP 2m56s has not:/apis/extensions/v1beta1/namespaces/default/replicasets 200 OK Successful (Bmessage:I0513 22:32:00.804112 68718 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:00.809470 68718 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:32:00.835279 68718 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?limit=10 200 OK in 2 milliseconds I0513 22:32:00.838088 68718 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTAzMywic3RhcnQiOiJzeXN0ZW06YWdncmVnYXRlLXRvLXZpZXdcdTAwMDAifQ&limit=10 200 OK in 2 milliseconds I0513 22:32:00.840940 68718 round_trippers.go:553] GET 
https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTAzMywic3RhcnQiOiJzeXN0ZW06Y29udHJvbGxlcjpjZXJ0aWZpY2F0ZS1jb250cm9sbGVyXHUwMDAwIn0&limit=10 200 OK in 2 milliseconds I0513 22:32:00.843060 68718 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTAzMywic3RhcnQiOiJzeXN0ZW06Y29udHJvbGxlcjpleHBhbmQtY29udHJvbGxlclx1MDAwMCJ9&limit=10 200 OK in 1 milliseconds I0513 22:32:00.845242 68718 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTAzMywic3RhcnQiOiJzeXN0ZW06Y29udHJvbGxlcjpyZXBsaWNhc2V0LWNvbnRyb2xsZXJcdTAwMDAifQ&limit=10 200 OK in 1 milliseconds I0513 22:32:00.847698 68718 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTAzMywic3RhcnQiOiJzeXN0ZW06ZGlzY292ZXJ5XHUwMDAwIn0&limit=10 200 OK in 1 milliseconds I0513 22:32:00.849637 68718 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTAzMywic3RhcnQiOiJzeXN0ZW06bm9kZS1wcm9ibGVtLWRldGVjdG9yXHUwMDAwIn0&limit=10 200 OK in 1 milliseconds NAME CREATED AT admin 2022-05-13T22:29:02Z aggregation-reader 2022-05-13T22:29:55Z cluster-admin 2022-05-13T22:29:02Z edit 2022-05-13T22:29:02Z pod-admin 2022-05-13T22:29:54Z resource-reader 2022-05-13T22:29:54Z resourcename-reader 2022-05-13T22:29:54Z system:aggregate-to-admin 2022-05-13T22:29:03Z system:aggregate-to-edit 2022-05-13T22:29:03Z system:aggregate-to-view 2022-05-13T22:29:03Z system:auth-delegator 2022-05-13T22:29:03Z system:basic-user 2022-05-13T22:29:02Z system:certificates.k8s.io:certificatesigningrequests:nodeclient 2022-05-13T22:29:03Z system:certificates.k8s.io:certificatesigningrequests:selfnodeclient 2022-05-13T22:29:03Z system:certificates.k8s.io:kube-apiserver-client-approver 2022-05-13T22:29:03Z system:certificates.k8s.io:kube-apiserver-client-kubelet-approver 2022-05-13T22:29:03Z system:certificates.k8s.io:kubelet-serving-approver 2022-05-13T22:29:03Z system:certificates.k8s.io:legacy-unknown-approver 2022-05-13T22:29:03Z system:controller:attachdetach-controller 2022-05-13T22:29:03Z system:controller:certificate-controller 2022-05-13T22:29:03Z system:controller:clusterrole-aggregation-controller 2022-05-13T22:29:03Z system:controller:cronjob-controller 2022-05-13T22:29:03Z system:controller:daemon-set-controller 2022-05-13T22:29:03Z system:controller:deployment-controller 2022-05-13T22:29:03Z system:controller:disruption-controller 2022-05-13T22:29:03Z system:controller:endpoint-controller 2022-05-13T22:29:03Z system:controller:endpointslice-controller 2022-05-13T22:29:03Z system:controller:endpointslicemirroring-controller 2022-05-13T22:29:03Z system:controller:ephemeral-volume-controller 2022-05-13T22:29:03Z system:controller:expand-controller 2022-05-13T22:29:03Z system:controller:generic-garbage-collector 2022-05-13T22:29:03Z system:controller:horizontal-pod-autoscaler 2022-05-13T22:29:03Z system:controller:job-controller 2022-05-13T22:29:03Z system:controller:namespace-controller 2022-05-13T22:29:03Z system:controller:node-controller 2022-05-13T22:29:03Z system:controller:persistent-volume-binder 2022-05-13T22:29:03Z system:controller:pod-garbage-collector 2022-05-13T22:29:03Z system:controller:pv-protection-controller 2022-05-13T22:29:03Z 
system:controller:pvc-protection-controller 2022-05-13T22:29:03Z system:controller:replicaset-controller 2022-05-13T22:29:03Z system:controller:replication-controller 2022-05-13T22:29:03Z system:controller:resourcequota-controller 2022-05-13T22:29:03Z system:controller:root-ca-cert-publisher 2022-05-13T22:29:03Z system:controller:route-controller 2022-05-13T22:29:03Z system:controller:service-account-controller 2022-05-13T22:29:03Z system:controller:service-controller 2022-05-13T22:29:03Z system:controller:statefulset-controller 2022-05-13T22:29:03Z system:controller:ttl-after-finished-controller 2022-05-13T22:29:03Z system:controller:ttl-controller 2022-05-13T22:29:03Z system:discovery 2022-05-13T22:29:02Z system:heapster 2022-05-13T22:29:03Z system:kube-aggregator 2022-05-13T22:29:03Z system:kube-controller-manager 2022-05-13T22:29:03Z system:kube-dns 2022-05-13T22:29:03Z system:kube-scheduler 2022-05-13T22:29:03Z system:kubelet-api-admin 2022-05-13T22:29:03Z system:monitoring 2022-05-13T22:29:02Z system:node 2022-05-13T22:29:03Z system:node-bootstrapper 2022-05-13T22:29:03Z system:node-problem-detector 2022-05-13T22:29:03Z system:node-proxier 2022-05-13T22:29:03Z system:persistent-volume-provisioner 2022-05-13T22:29:03Z system:public-info-viewer 2022-05-13T22:29:02Z system:service-account-issuer-discovery 2022-05-13T22:29:03Z system:volume-scheduler 2022-05-13T22:29:03Z url-reader 2022-05-13T22:29:55Z view 2022-05-13T22:29:02Z has:/clusterroles?limit=10 200 OK Successful (Bmessage:I0513 22:32:00.804112 68718 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:00.809470 68718 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:32:00.835279 68718 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?limit=10 200 OK in 2 milliseconds I0513 22:32:00.838088 68718 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTAzMywic3RhcnQiOiJzeXN0ZW06YWdncmVnYXRlLXRvLXZpZXdcdTAwMDAifQ&limit=10 200 OK in 2 milliseconds I0513 22:32:00.840940 68718 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTAzMywic3RhcnQiOiJzeXN0ZW06Y29udHJvbGxlcjpjZXJ0aWZpY2F0ZS1jb250cm9sbGVyXHUwMDAwIn0&limit=10 200 OK in 2 milliseconds I0513 22:32:00.843060 68718 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTAzMywic3RhcnQiOiJzeXN0ZW06Y29udHJvbGxlcjpleHBhbmQtY29udHJvbGxlclx1MDAwMCJ9&limit=10 200 OK in 1 milliseconds I0513 22:32:00.845242 68718 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTAzMywic3RhcnQiOiJzeXN0ZW06Y29udHJvbGxlcjpyZXBsaWNhc2V0LWNvbnRyb2xsZXJcdTAwMDAifQ&limit=10 200 OK in 1 milliseconds I0513 22:32:00.847698 68718 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTAzMywic3RhcnQiOiJzeXN0ZW06ZGlzY292ZXJ5XHUwMDAwIn0&limit=10 200 OK in 1 milliseconds I0513 22:32:00.849637 68718 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6MTAzMywic3RhcnQiOiJzeXN0ZW06bm9kZS1wcm9ibGVtLWRldGVjdG9yXHUwMDAwIn0&limit=10 200 OK in 1 milliseconds NAME 
CREATED AT admin 2022-05-13T22:29:02Z aggregation-reader 2022-05-13T22:29:55Z cluster-admin 2022-05-13T22:29:02Z edit 2022-05-13T22:29:02Z pod-admin 2022-05-13T22:29:54Z resource-reader 2022-05-13T22:29:54Z resourcename-reader 2022-05-13T22:29:54Z system:aggregate-to-admin 2022-05-13T22:29:03Z system:aggregate-to-edit 2022-05-13T22:29:03Z system:aggregate-to-view 2022-05-13T22:29:03Z system:auth-delegator 2022-05-13T22:29:03Z system:basic-user 2022-05-13T22:29:02Z system:certificates.k8s.io:certificatesigningrequests:nodeclient 2022-05-13T22:29:03Z system:certificates.k8s.io:certificatesigningrequests:selfnodeclient 2022-05-13T22:29:03Z system:certificates.k8s.io:kube-apiserver-client-approver 2022-05-13T22:29:03Z system:certificates.k8s.io:kube-apiserver-client-kubelet-approver 2022-05-13T22:29:03Z system:certificates.k8s.io:kubelet-serving-approver 2022-05-13T22:29:03Z system:certificates.k8s.io:legacy-unknown-approver 2022-05-13T22:29:03Z system:controller:attachdetach-controller 2022-05-13T22:29:03Z system:controller:certificate-controller 2022-05-13T22:29:03Z system:controller:clusterrole-aggregation-controller 2022-05-13T22:29:03Z system:controller:cronjob-controller 2022-05-13T22:29:03Z system:controller:daemon-set-controller 2022-05-13T22:29:03Z system:controller:deployment-controller 2022-05-13T22:29:03Z system:controller:disruption-controller 2022-05-13T22:29:03Z system:controller:endpoint-controller 2022-05-13T22:29:03Z system:controller:endpointslice-controller 2022-05-13T22:29:03Z system:controller:endpointslicemirroring-controller 2022-05-13T22:29:03Z system:controller:ephemeral-volume-controller 2022-05-13T22:29:03Z system:controller:expand-controller 2022-05-13T22:29:03Z system:controller:generic-garbage-collector 2022-05-13T22:29:03Z system:controller:horizontal-pod-autoscaler 2022-05-13T22:29:03Z system:controller:job-controller 2022-05-13T22:29:03Z system:controller:namespace-controller 2022-05-13T22:29:03Z system:controller:node-controller 2022-05-13T22:29:03Z system:controller:persistent-volume-binder 2022-05-13T22:29:03Z system:controller:pod-garbage-collector 2022-05-13T22:29:03Z system:controller:pv-protection-controller 2022-05-13T22:29:03Z system:controller:pvc-protection-controller 2022-05-13T22:29:03Z system:controller:replicaset-controller 2022-05-13T22:29:03Z system:controller:replication-controller 2022-05-13T22:29:03Z system:controller:resourcequota-controller 2022-05-13T22:29:03Z system:controller:root-ca-cert-publisher 2022-05-13T22:29:03Z system:controller:route-controller 2022-05-13T22:29:03Z system:controller:service-account-controller 2022-05-13T22:29:03Z system:controller:service-controller 2022-05-13T22:29:03Z system:controller:statefulset-controller 2022-05-13T22:29:03Z system:controller:ttl-after-finished-controller 2022-05-13T22:29:03Z system:controller:ttl-controller 2022-05-13T22:29:03Z system:discovery 2022-05-13T22:29:02Z system:heapster 2022-05-13T22:29:03Z system:kube-aggregator 2022-05-13T22:29:03Z system:kube-controller-manager 2022-05-13T22:29:03Z system:kube-dns 2022-05-13T22:29:03Z system:kube-scheduler 2022-05-13T22:29:03Z system:kubelet-api-admin 2022-05-13T22:29:03Z system:monitoring 2022-05-13T22:29:02Z system:node 2022-05-13T22:29:03Z system:node-bootstrapper 2022-05-13T22:29:03Z system:node-problem-detector 2022-05-13T22:29:03Z system:node-proxier 2022-05-13T22:29:03Z system:persistent-volume-provisioner 2022-05-13T22:29:03Z system:public-info-viewer 2022-05-13T22:29:02Z system:service-account-issuer-discovery 2022-05-13T22:29:03Z 
system:volume-scheduler   2022-05-13T22:29:03Z
url-reader   2022-05-13T22:29:55Z
view   2022-05-13T22:29:02Z
has:/v1/clusterroles?continue=
Successful
message:I0513 22:32:00.898883   68731 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config
I0513 22:32:00.904276   68731 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0513 22:32:00.933894   68731 round_trippers.go:553] GET https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?limit=500 200 OK in 5 milliseconds
NAME   CREATED AT
admin   2022-05-13T22:29:02Z
aggregation-reader   2022-05-13T22:29:55Z
cluster-admin   2022-05-13T22:29:02Z
edit   2022-05-13T22:29:02Z
pod-admin   2022-05-13T22:29:54Z
resource-reader   2022-05-13T22:29:54Z
resourcename-reader   2022-05-13T22:29:54Z
system:aggregate-to-admin   2022-05-13T22:29:03Z
system:aggregate-to-edit   2022-05-13T22:29:03Z
system:aggregate-to-view   2022-05-13T22:29:03Z
system:auth-delegator   2022-05-13T22:29:03Z
system:basic-user   2022-05-13T22:29:02Z
system:certificates.k8s.io:certificatesigningrequests:nodeclient   2022-05-13T22:29:03Z
system:certificates.k8s.io:certificatesigningrequests:selfnodeclient   2022-05-13T22:29:03Z
system:certificates.k8s.io:kube-apiserver-client-approver   2022-05-13T22:29:03Z
system:certificates.k8s.io:kube-apiserver-client-kubelet-approver   2022-05-13T22:29:03Z
system:certificates.k8s.io:kubelet-serving-approver   2022-05-13T22:29:03Z
system:certificates.k8s.io:legacy-unknown-approver   2022-05-13T22:29:03Z
system:controller:attachdetach-controller   2022-05-13T22:29:03Z
system:controller:certificate-controller   2022-05-13T22:29:03Z
system:controller:clusterrole-aggregation-controller   2022-05-13T22:29:03Z
system:controller:cronjob-controller   2022-05-13T22:29:03Z
system:controller:daemon-set-controller   2022-05-13T22:29:03Z
system:controller:deployment-controller   2022-05-13T22:29:03Z
system:controller:disruption-controller   2022-05-13T22:29:03Z
system:controller:endpoint-controller   2022-05-13T22:29:03Z
system:controller:endpointslice-controller   2022-05-13T22:29:03Z
system:controller:endpointslicemirroring-controller   2022-05-13T22:29:03Z
system:controller:ephemeral-volume-controller   2022-05-13T22:29:03Z
system:controller:expand-controller   2022-05-13T22:29:03Z
system:controller:generic-garbage-collector   2022-05-13T22:29:03Z
system:controller:horizontal-pod-autoscaler   2022-05-13T22:29:03Z
system:controller:job-controller   2022-05-13T22:29:03Z
system:controller:namespace-controller   2022-05-13T22:29:03Z
system:controller:node-controller   2022-05-13T22:29:03Z
system:controller:persistent-volume-binder   2022-05-13T22:29:03Z
system:controller:pod-garbage-collector   2022-05-13T22:29:03Z
system:controller:pv-protection-controller   2022-05-13T22:29:03Z
system:controller:pvc-protection-controller   2022-05-13T22:29:03Z
system:controller:replicaset-controller   2022-05-13T22:29:03Z
system:controller:replication-controller   2022-05-13T22:29:03Z
system:controller:resourcequota-controller   2022-05-13T22:29:03Z
system:controller:root-ca-cert-publisher   2022-05-13T22:29:03Z
system:controller:route-controller   2022-05-13T22:29:03Z
system:controller:service-account-controller   2022-05-13T22:29:03Z
system:controller:service-controller   2022-05-13T22:29:03Z
system:controller:statefulset-controller   2022-05-13T22:29:03Z
system:controller:ttl-after-finished-controller   2022-05-13T22:29:03Z
system:controller:ttl-controller   2022-05-13T22:29:03Z
system:discovery   2022-05-13T22:29:02Z
system:heapster   2022-05-13T22:29:03Z
system:kube-aggregator   2022-05-13T22:29:03Z
system:kube-controller-manager   2022-05-13T22:29:03Z
system:kube-dns   2022-05-13T22:29:03Z
system:kube-scheduler   2022-05-13T22:29:03Z
system:kubelet-api-admin   2022-05-13T22:29:03Z
system:monitoring   2022-05-13T22:29:02Z
system:node   2022-05-13T22:29:03Z
system:node-bootstrapper   2022-05-13T22:29:03Z
system:node-problem-detector   2022-05-13T22:29:03Z
system:node-proxier   2022-05-13T22:29:03Z
system:persistent-volume-provisioner   2022-05-13T22:29:03Z
system:public-info-viewer   2022-05-13T22:29:02Z
system:service-account-issuer-discovery   2022-05-13T22:29:03Z
system:volume-scheduler   2022-05-13T22:29:03Z
url-reader   2022-05-13T22:29:55Z
view   2022-05-13T22:29:02Z
has:/clusterroles?limit=500 200 OK
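The two has: checks above exercise kubectl's paginated LIST plumbing: the first run pages through clusterroles with a continue token, the second confirms the default page size of 500 items. A minimal way to watch the same query parameters outside the harness, assuming a reachable cluster and a current kubectl:

# Page the list in chunks of at most 500 items; -v=6 logs each request URL,
# so the limit= and continue= query parameters become visible in the output.
kubectl get clusterroles --chunk-size=500 -v=6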
Successful
message:default   Active   2m57s
kube-node-lease   Active   3m
kube-public   Active   3m
kube-system   Active   3m
namespace-1652480982-11192   Active   2m19s
namespace-1652480982-23886   Active   2m19s
namespace-1652480983-20082   Active   2m18s
namespace-1652480985-26870   Active   2m16s
namespace-1652480993-8368   Active   2m8s
namespace-1652481001-2375   Active   2m
namespace-1652481005-22225   Active   116s
namespace-1652481005-3273   Active   116s
namespace-1652481008-14417   Active   113s
namespace-1652481009-28266   Active   112s
namespace-1652481009-4588   Active   112s
namespace-1652481020-13892   Active   102s
namespace-1652481020-21605   Active   102s
namespace-1652481033-24918   Active   89s
namespace-1652481033-5199   Active   89s
namespace-1652481035-6580   Active   87s
namespace-1652481036-9652   Active   87s
namespace-1652481037-29023   Active   86s
namespace-1652481039-18762   Active   84s
namespace-1652481039-21952   Active   84s
namespace-1652481089-3702   Active   35s
namespace-1652481094-345   Active   30s
namespace-1652481095-22575   Active   29s
namespace-1652481096-1146   Active   28s
namespace-1652481111-9703   Active   13s
namespace-1652481118-5908   Active   7s
namespace-1652481118-9708   Active   7s
nsb   Active   12s
has:default
Successful
message:default   Active   2m57s
kube-node-lease   Active   3m
kube-public   Active   3m
kube-system   Active   3m
namespace-1652480982-11192   Active   2m19s
namespace-1652480982-23886   Active   2m19s
namespace-1652480983-20082   Active   2m18s
namespace-1652480985-26870   Active   2m16s
namespace-1652480993-8368   Active   2m8s
namespace-1652481001-2375   Active   2m
namespace-1652481005-22225   Active   116s
namespace-1652481005-3273   Active   116s
namespace-1652481008-14417   Active   113s
namespace-1652481009-28266   Active   112s
namespace-1652481009-4588   Active   112s
namespace-1652481020-13892   Active   102s
namespace-1652481020-21605   Active   102s
namespace-1652481033-24918   Active   89s
namespace-1652481033-5199   Active   89s
namespace-1652481035-6580   Active   87s
namespace-1652481036-9652   Active   87s
namespace-1652481037-29023   Active   86s
namespace-1652481039-18762   Active   84s
namespace-1652481039-21952   Active   84s
namespace-1652481089-3702   Active   35s
namespace-1652481094-345   Active   30s
namespace-1652481095-22575   Active   29s
namespace-1652481096-1146   Active   28s
namespace-1652481111-9703   Active   13s
namespace-1652481118-5908   Active   7s
namespace-1652481118-9708   Active   7s
nsb   Active   12s
has:kube-public
Successful
message:default   Active   2m57s
kube-node-lease   Active   3m
kube-public   Active   3m
kube-system   Active   3m
namespace-1652480982-11192   Active   2m19s
namespace-1652480982-23886   Active   2m19s
namespace-1652480983-20082   Active   2m18s
namespace-1652480985-26870   Active   2m16s
namespace-1652480993-8368   Active   2m8s
namespace-1652481001-2375   Active   2m
namespace-1652481005-22225   Active   116s
namespace-1652481005-3273   Active   116s
namespace-1652481008-14417   Active   113s
namespace-1652481009-28266   Active   112s
namespace-1652481009-4588   Active   112s
namespace-1652481020-13892   Active   102s
namespace-1652481020-21605   Active   102s
namespace-1652481033-24918   Active   89s
namespace-1652481033-5199   Active   89s
namespace-1652481035-6580   Active   87s
namespace-1652481036-9652   Active   87s
namespace-1652481037-29023   Active   86s
namespace-1652481039-18762   Active   84s
namespace-1652481039-21952   Active   84s
namespace-1652481089-3702   Active   35s
namespace-1652481094-345   Active   30s
namespace-1652481095-22575   Active   29s
namespace-1652481096-1146   Active   28s
namespace-1652481111-9703   Active   13s
namespace-1652481118-5908   Active   7s
namespace-1652481118-9708   Active   7s
nsb   Active   12s
has:kube-system
get.sh:137: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"one\" }}found{{end}}{{end}}:: :
get.sh:138: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"two\" }}found{{end}}{{end}}:: :
get.sh:139: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"three\" }}found{{end}}{{end}}:: :
configmap/one created
configmap/two created
configmap/three created
Successful
message:NAME               DATA   AGE
kube-root-ca.crt   1      7s
one                0      0s
three              0      0s
two                0      0s
has not:watch is only supported on individual resources
Successful
message:
has not:watch is only supported on individual resources
+++ [0513 22:32:08] Creating namespace namespace-1652481128-17965
namespace/namespace-1652481128-17965 created
Context "test" modified.
get.sh:153: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {
                "creationTimestamp": "2022-05-13T22:32:08Z",
                "labels": {
                    "name": "valid-pod"
                },
                "name": "valid-pod",
                "namespace": "namespace-1652481128-17965",
                "resourceVersion": "1045",
                "uid": "326ee36f-4f37-420e-af3e-868af6759f7d"
            },
            "spec": {
                "containers": [
                    {
                        "image": "k8s.gcr.io/serve_hostname",
                        "imagePullPolicy": "Always",
                        "name": "kubernetes-serve-hostname",
                        "resources": {
                            "limits": {
                                "cpu": "1",
                                "memory": "512Mi"
                            },
                            "requests": {
                                "cpu": "1",
                                "memory": "512Mi"
                            }
                        },
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File"
                    }
                ],
                "dnsPolicy": "ClusterFirst",
                "enableServiceLinks": true,
                "preemptionPolicy": "PreemptLowerPriority",
                "priority": 0,
                "restartPolicy": "Always",
                "schedulerName": "default-scheduler",
                "securityContext": {},
                "terminationGracePeriodSeconds": 30
            },
            "status": {
                "phase": "Pending",
                "qosClass": "Guaranteed"
            }
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": ""
    }
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found.
Printing more information for debugging the template:
template was:
	{.missing}
object given to jsonpath engine was: map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2022-05-13T22:32:08Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl-create", "operation":"Update", "time":"2022-05-13T22:32:08Z"}}, "name":"valid-pod", "namespace":"namespace-1652481128-17965", "resourceVersion":"1045", "uid":"326ee36f-4f37-420e-af3e-868af6759f7d"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "preemptionPolicy":"PreemptLowerPriority", "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing".
Printing more information for debugging the template:
template was:
	{{.missing}}
raw data was: {"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2022-05-13T22:32:08Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl-create","operation":"Update","time":"2022-05-13T22:32:08Z"}],"name":"valid-pod","namespace":"namespace-1652481128-17965","resourceVersion":"1045","uid":"326ee36f-4f37-420e-af3e-868af6759f7d"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
object given to template engine was: map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2022-05-13T22:32:08Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl-create operation:Update time:2022-05-13T22:32:08Z]] name:valid-pod namespace:namespace-1652481128-17965 resourceVersion:1045 uid:326ee36f-4f37-420e-af3e-868af6759f7d] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true preemptionPolicy:PreemptLowerPriority priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
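Both failures above come from asking an output template for a key the object does not have; the jsonpath and go-template engines report it differently, which is exactly what the two has: checks pin down. A minimal sketch, assuming a pod named valid-pod exists in the current namespace:

# jsonpath engine: a missing key fails with "missing is not found"
kubectl get pod valid-pod -o jsonpath='{.missing}'
# go-template engine: the same lookup fails with: map has no entry for key "missing"
kubectl get pod valid-pod -o go-template='{{.missing}}'
# an existing key renders fine with either engine
kubectl get pod valid-pod -o jsonpath='{.metadata.name}'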
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
Successful
message:Error from server (NotFound): the server could not find the requested resource
has:the server could not find the requested resource
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:STATUS
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:pod/valid-pod
has not:STATUS
Successful
message:pod/valid-pod
has:pod/valid-pod
Successful
message:apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-05-13T22:32:08Z"
  labels:
    name: valid-pod
  name: valid-pod
  namespace: namespace-1652481128-17965
  resourceVersion: "1045"
  uid: 326ee36f-4f37-420e-af3e-868af6759f7d
spec:
  containers:
  - image: k8s.gcr.io/serve_hostname
    imagePullPolicy: Always
    name: kubernetes-serve-hostname
    resources:
      limits:
        cpu: "1"
        memory: 512Mi
      requests:
        cpu: "1"
        memory: 512Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has not:STATUS
Successful
message:apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-05-13T22:32:08Z"
  labels:
    name: valid-pod
  name: valid-pod
  namespace: namespace-1652481128-17965
  resourceVersion: "1045"
  uid: 326ee36f-4f37-420e-af3e-868af6759f7d
spec:
  containers:
  - image: k8s.gcr.io/serve_hostname
    imagePullPolicy: Always
    name: kubernetes-serve-hostname
    resources:
      limits:
        cpu: "1"
        memory: 512Mi
      requests:
        cpu: "1"
        memory: 512Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:204: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/redis-master created
pod/valid-pod created
Successful
message:redis-master valid-pod
has:redis-master valid-pod
pod "redis-master" deleted
pod "valid-pod" deleted
get.sh:218: Successful get configmaps --field-selector=metadata.name=test-the-map {{range.items}}{{.metadata.name}}:{{end}}:
get.sh:219: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}:
get.sh:220: Successful get services {{range.items}}{{.metadata.name}}:{{end}}:
configmap/test-the-map created
I0513 22:32:13.277536   53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481128-17965/test-the-service" clusterIPs=map[IPv4:10.0.0.208]
service/test-the-service created
deployment.apps/test-the-deployment created
I0513 22:32:13.300591   56663 event.go:294] "Event occurred" object="namespace-1652481128-17965/test-the-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-the-deployment-6f7568b6b8 to 3"
I0513 22:32:13.308151   56663 event.go:294] "Event occurred" object="namespace-1652481128-17965/test-the-deployment-6f7568b6b8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6f7568b6b8-4zcvd"
I0513 22:32:13.314427   56663 event.go:294] "Event occurred" object="namespace-1652481128-17965/test-the-deployment-6f7568b6b8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6f7568b6b8-zqfxp"
I0513 22:32:13.314462   56663 event.go:294] "Event occurred" object="namespace-1652481128-17965/test-the-deployment-6f7568b6b8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6f7568b6b8-sh578"
reason="SuccessfulCreate" message="Created pod: test-the-deployment-6f7568b6b8-sh578" Successful (Bmessage:test-the-map test-the-service test-the-deployment has:test-the-map Successful (Bmessage:test-the-map test-the-service test-the-deployment has:test-the-deployment Successful (Bmessage:test-the-map test-the-service test-the-deployment has:test-the-service configmap "test-the-map" deleted service "test-the-service" deleted deployment.apps "test-the-deployment" deleted get.sh:235: Successful get configmaps --field-selector=metadata.name=test-the-map {{range.items}}{{.metadata.name}}:{{end}}: (Bget.sh:236: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bget.sh:237: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: (B+++ exit code: 0 Recording: run_kubectl_exec_pod_tests Running command: run_kubectl_exec_pod_tests +++ Running case: test-cmd.run_kubectl_exec_pod_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_exec_pod_tests +++ [0513 22:32:13] Creating namespace namespace-1652481133-16194 namespace/namespace-1652481133-16194 created Context "test" modified. +++ [0513 22:32:13] Testing kubectl exec POD COMMAND Successful (Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. Error from server (NotFound): pods "abc" not found has:pods "abc" not found pod/test-pod created Successful (Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. Error from server (BadRequest): pod test-pod does not have a host assigned has not:pods "test-pod" not found Successful (Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. Error from server (BadRequest): pod test-pod does not have a host assigned has not:pod or type/name must be specified pod "test-pod" deleted +++ exit code: 0 Recording: run_kubectl_exec_resource_name_tests Running command: run_kubectl_exec_resource_name_tests +++ Running case: test-cmd.run_kubectl_exec_resource_name_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_exec_resource_name_tests +++ [0513 22:32:14] Creating namespace namespace-1652481134-32466 namespace/namespace-1652481134-32466 created Context "test" modified. +++ [0513 22:32:14] Testing kubectl exec TYPE/NAME COMMAND Successful (Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. error: the server doesn't have a resource type "foo" has:error: Successful (Bmessage:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. 
Recording: run_kubectl_exec_pod_tests
Running command: run_kubectl_exec_pod_tests
+++ Running case: test-cmd.run_kubectl_exec_pod_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_exec_pod_tests
+++ [0513 22:32:13] Creating namespace namespace-1652481133-16194
namespace/namespace-1652481133-16194 created
Context "test" modified.
+++ [0513 22:32:13] Testing kubectl exec POD COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests
+++ Running case: test-cmd.run_kubectl_exec_resource_name_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_exec_resource_name_tests
+++ [0513 22:32:14] Creating namespace namespace-1652481134-32466
namespace/namespace-1652481134-32466 created
Context "test" modified.
+++ [0513 22:32:14] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0513 22:32:15.449162   56663 event.go:294] "Event occurred" object="namespace-1652481134-32466/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-x5jvw"
I0513 22:32:15.456907   56663 event.go:294] "Event occurred" object="namespace-1652481134-32466/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-k4cqh"
I0513 22:32:15.456935   56663 event.go:294] "Event occurred" object="namespace-1652481134-32466/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-vw87c"
configmap/test-set-env-config created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-k4cqh does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-k4cqh does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
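Every exec case above prints the same deprecation warning: the positional form kubectl exec POD COMMAND is being phased out in favor of an explicit -- separator, and a TYPE/NAME argument is resolved to one of its pods before the exec happens. A sketch with placeholder names (test-pod, frontend and the hostname command are only illustrative):

# deprecated positional form, still accepted but warned about
kubectl exec test-pod hostname
# current form: everything after -- is the remote command
kubectl exec test-pod -- hostname
# resource/name is resolved to an underlying pod first
kubectl exec replicaset/frontend -- hostname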
Recording: run_create_secret_tests
Running command: run_create_secret_tests
+++ Running case: test-cmd.run_create_secret_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"585870cf-72f0-4eac-b83e-da58614e58df","resourceVersion":"1124","creationTimestamp":"2022-05-13T22:32:16Z"}}
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"585870cf-72f0-4eac-b83e-da58614e58df","resourceVersion":"1125","creationTimestamp":"2022-05-13T22:32:16Z"},"data":{"key1":"config1"}}
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"585870cf-72f0-4eac-b83e-da58614e58df","resourceVersion":"1125","creationTimestamp":"2022-05-13T22:32:16Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"585870cf-72f0-4eac-b83e-da58614e58df"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
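run_create_secret_tests boils down to the round trip below: create a generic secret from a literal, read the base64-encoded value back, and delete it. A minimal sketch, assuming nothing named mysecret exists yet:

kubectl create secret generic mysecret --from-literal=username=user-specified
kubectl get secret mysecret -o jsonpath='{.data.username}' | base64 -d   # prints: user-specified
kubectl delete secret mysecret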
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests
+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_kustomization_directory_tests
create.sh:126: Successful get configmaps --field-selector=metadata.name=test-the-map {{range.items}}{{.metadata.name}}:{{end}}:
create.sh:127: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}:
create.sh:128: Successful get services {{range.items}}{{.metadata.name}}:{{end}}:
configmap/test-the-map created
I0513 22:32:17.250722   53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481134-32466/test-the-service" clusterIPs=map[IPv4:10.0.0.129]
service/test-the-service created
deployment.apps/test-the-deployment created
I0513 22:32:17.266354   56663 event.go:294] "Event occurred" object="namespace-1652481134-32466/test-the-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-the-deployment-6f7568b6b8 to 3"
I0513 22:32:17.272706   56663 event.go:294] "Event occurred" object="namespace-1652481134-32466/test-the-deployment-6f7568b6b8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6f7568b6b8-ng2br"
I0513 22:32:17.279670   56663 event.go:294] "Event occurred" object="namespace-1652481134-32466/test-the-deployment-6f7568b6b8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6f7568b6b8-w8bqm"
I0513 22:32:17.279713   56663 event.go:294] "Event occurred" object="namespace-1652481134-32466/test-the-deployment-6f7568b6b8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-6f7568b6b8-27m99"
create.sh:134: Successful get configmap test-the-map {{.metadata.name}}: test-the-map
create.sh:135: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
create.sh:136: Successful get service test-the-service {{.metadata.name}}: test-the-service
configmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
+++ exit code: 0
Recording: run_kubectl_create_validate_tests
Running command: run_kubectl_create_validate_tests
+++ Running case: test-cmd.run_kubectl_create_validate_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_validate_tests
+++ [0513 22:32:17] Creating namespace namespace-1652481137-26432
namespace/namespace-1652481137-26432 created
Context "test" modified.
+++ [0513 22:32:17] Testing kubectl create --validate=true
Successful
message:error: error validating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "baz" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): unknown field "foo" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false
has either:strict decoding error
or:error validating data
+++ [0513 22:32:17] Testing kubectl create --validate=false
Successful
message:deployment.apps/invalid-nginx-deployment created
has:deployment.apps/invalid-nginx-deployment created
I0513 22:32:17.999649   56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-5fdd67897d to 4"
I0513 22:32:18.052012   56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-kfppv"
deployment.apps "invalid-nginx-deployment" deleted
I0513 22:32:18.063761   56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-vxvt7"
I0513 22:32:18.063791   56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-wnfh6"
I0513 22:32:18.076382   56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-2ngjv"
+++ [0513 22:32:18] Testing kubectl create --validate=strict
E0513 22:32:18.120432   56663 replica_set.go:550] sync "namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d" failed with replicasets.apps "invalid-nginx-deployment-5fdd67897d" not found
error validating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "baz" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): unknown field "foo" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false has either:strict decoding error or:error validating data +++ [0513 22:32:18] Testing kubectl create --validate=warn W0513 22:32:18.365454 70091 schema.go:146] cannot perform warn validation if server-side field validation is unsupported, skipping validation Successful (Bmessage:deployment.apps/invalid-nginx-deployment created has:deployment.apps/invalid-nginx-deployment created I0513 22:32:18.379648 56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-5fdd67897d to 4" I0513 22:32:18.387407 56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-q8bsk" I0513 22:32:18.393951 56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-xsnz9" I0513 22:32:18.395631 56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-h9vsw" I0513 22:32:18.422560 56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-8kdq5" deployment.apps "invalid-nginx-deployment" deleted +++ [0513 22:32:18] Testing kubectl create --validate=ignore E0513 22:32:18.471407 56663 replica_set.go:550] sync "namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d" failed with Operation cannot be fulfilled on replicasets.apps "invalid-nginx-deployment-5fdd67897d": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: aa943204-a90a-4f0d-a5dd-ad1e68488552, UID in object meta: Successful (Bmessage:deployment.apps/invalid-nginx-deployment created has:deployment.apps/invalid-nginx-deployment created I0513 22:32:18.547874 56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-5fdd67897d to 4" I0513 22:32:18.557878 56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-29jwn" I0513 22:32:18.564611 56663 event.go:294] "Event occurred" 
object="namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-5bppt" I0513 22:32:18.566805 56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-stmpq" I0513 22:32:18.571870 56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-5fdd67897d-dv9rl" deployment.apps "invalid-nginx-deployment" deleted +++ [0513 22:32:18] Testing kubectl create E0513 22:32:18.610657 56663 replica_set.go:550] sync "namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d" failed with Operation cannot be fulfilled on replicasets.apps "invalid-nginx-deployment-5fdd67897d": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1652481137-26432/invalid-nginx-deployment-5fdd67897d, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 9183a4ef-5da4-4584-9e30-891eb6ad58c0, UID in object meta: Successful message:error: error validating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "baz" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): unknown field "foo" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false has either:strict decoding error or:error validating data +++ [0513 22:32:18] Testing kubectl create --validate=foo Successful (Bmessage:error: invalid - validate option "foo"; must be one of: strict (or true), warn, ignore (or false) has:invalid - validate option "foo" +++ exit code: 0 Recording: run_convert_tests Running command: run_convert_tests +++ Running case: test-cmd.run_convert_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_convert_tests convert.sh:27: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bdeployment.apps/nginx created I0513 22:32:19.109566 56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-585d4bd5c9 to 3" I0513 22:32:19.116769 56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/nginx-585d4bd5c9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-585d4bd5c9-ml6qk" I0513 22:32:19.125032 56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/nginx-585d4bd5c9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-585d4bd5c9-9ns2w" I0513 22:32:19.125077 56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/nginx-585d4bd5c9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-585d4bd5c9-nn9qj" convert.sh:31: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx: (Bconvert.sh:32: Successful get deployment {{range.items}}{{(index 
Recording: run_convert_tests
Running command: run_convert_tests
+++ Running case: test-cmd.run_convert_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_convert_tests
convert.sh:27: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}:
deployment.apps/nginx created
I0513 22:32:19.109566   56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-585d4bd5c9 to 3"
I0513 22:32:19.116769   56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/nginx-585d4bd5c9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-585d4bd5c9-ml6qk"
I0513 22:32:19.125032   56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/nginx-585d4bd5c9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-585d4bd5c9-9ns2w"
I0513 22:32:19.125077   56663 event.go:294] "Event occurred" object="namespace-1652481137-26432/nginx-585d4bd5c9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-585d4bd5c9-nn9qj"
convert.sh:31: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
convert.sh:32: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
convert.sh:36: Successful get deployment nginx {{ .apiVersion }}: apps/v1
Successful
message:apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    name: nginx-undo
  name: nginx
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name: nginx-undo
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: nginx-undo
    spec:
      containers:
      - image: k8s.gcr.io/nginx:test-cmd
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:apps/v1beta1
deployment.apps "nginx" deleted
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
Successful
message:nginx:
has:nginx:
+++ exit code: 0
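The convert tests run the kubectl-convert plugin (built earlier in this job next to kubectl) to rewrite a manifest into a different API version without touching the live object. A minimal sketch, with nginx-deployment.yaml standing in for any apps/v1 Deployment manifest:

# emit the same object converted to the requested group/version
kubectl convert -f nginx-deployment.yaml --output-version apps/v1beta1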
Recording: run_kubectl_delete_allnamespaces_tests
Running command: run_kubectl_delete_allnamespaces_tests
+++ Running case: test-cmd.run_kubectl_delete_allnamespaces_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_delete_allnamespaces_tests
namespace/namespace-1652481139-6998 created
namespace/namespace-1652481139-15822 created
configmap/one created
configmap/two created
configmap/one labeled
configmap/two labeled
configmap "two" deleted (dry run)
configmap "one" deleted (dry run)
configmap "two" deleted (server dry run)
configmap "one" deleted (server dry run)
Context "test" modified.
delete.sh:40: Successful get configmap -l deletetest {{range.items}}{{.metadata.name}}:{{end}}: one:
Context "test" modified.
delete.sh:42: Successful get configmap -l deletetest {{range.items}}{{.metadata.name}}:{{end}}: two:
configmap "two" deleted
configmap "one" deleted
Context "test" modified.
delete.sh:48: Successful get configmap -l deletetest {{range.items}}{{.metadata.name}}:{{end}}:
Context "test" modified.
delete.sh:50: Successful get configmap -l deletetest {{range.items}}{{.metadata.name}}:{{end}}:
+++ exit code: 0
Recording: run_kubectl_request_timeout_tests
Running command: run_kubectl_request_timeout_tests
+++ Running case: test-cmd.run_kubectl_request_timeout_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_request_timeout_tests
+++ [0513 22:32:20] Testing kubectl request timeout
+++ [0513 22:32:20] Creating namespace namespace-1652481140-19966
namespace/namespace-1652481140-19966 created
Context "test" modified.
request-timeout.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {
                "creationTimestamp": "2022-05-13T22:32:21Z",
                "labels": {
                    "name": "valid-pod"
                },
                "name": "valid-pod",
                "namespace": "namespace-1652481140-19966",
                "resourceVersion": "1276",
                "uid": "79711819-125a-4be3-b422-f0382fcbeed9"
            },
            "spec": {
                "containers": [
                    {
                        "image": "k8s.gcr.io/serve_hostname",
                        "imagePullPolicy": "Always",
                        "name": "kubernetes-serve-hostname",
                        "resources": {
                            "limits": {
                                "cpu": "1",
                                "memory": "512Mi"
                            },
                            "requests": {
                                "cpu": "1",
                                "memory": "512Mi"
                            }
                        },
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File"
                    }
                ],
                "dnsPolicy": "ClusterFirst",
                "enableServiceLinks": true,
                "preemptionPolicy": "PreemptLowerPriority",
                "priority": 0,
                "restartPolicy": "Always",
                "schedulerName": "default-scheduler",
                "securityContext": {},
                "terminationGracePeriodSeconds": 30
            },
            "status": {
                "phase": "Pending",
                "qosClass": "Guaranteed"
            }
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": ""
    }
}
request-timeout.sh:34: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
FAIL!
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
I0513 22:32:22.377453   70759 streamwatcher.go:114] Unable to decode an event from the watch stream: context deadline exceeded
has not:Timeout
42 /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/request-timeout.sh
!!! [0513 22:32:22] Call tree:
!!! [0513 22:32:22]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 run_kubectl_request_timeout_tests(...)
!!! [0513 22:32:22]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0513 22:32:22]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:138 juLog(...)
!!! [0513 22:32:22]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:597 record_command(...)
!!! [0513 22:32:22]  5: hack/make-rules/test-cmd.sh:194 runTests(...)
+++ exit code: 1
+++ error: 1
Error when running run_kubectl_request_timeout_tests
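The FAIL! above is the --request-timeout case: the harness expected a client-side timeout to be reported, but the watch ended with a streamwatcher context-deadline message instead. The flag itself sets a per-request deadline on the client:

# give up on the request after five seconds
kubectl get pods --request-timeout=5s
# a watch started with a deadline terminates when the deadline expires
kubectl get pods --watch --request-timeout=1s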
Recording: run_crd_tests
Running command: run_crd_tests
+++ Running case: test-cmd.run_crd_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_crd_tests
+++ [0513 22:32:22] Creating namespace namespace-1652481142-24542
namespace/namespace-1652481142-24542 created
Context "test" modified.
+++ [0513 22:32:22] Testing kubectl crd
customresourcedefinition.apiextensions.k8s.io/foos.company.com created
crd.sh:73: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\"}}{{.metadata.name}}:{{end}}{{end}}: foos.company.com:
customresourcedefinition.apiextensions.k8s.io/bars.company.com created
W0513 22:32:22.967349   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
W0513 22:32:22.967378   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.StatusCause
W0513 22:32:22.967385   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.FieldsV1
W0513 22:32:22.967391   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.OwnerReference
W0513 22:32:22.967397   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.StatusDetails
W0513 22:32:22.967402   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions
W0513 22:32:22.967407   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Preconditions
W0513 22:32:22.967414   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Patch
W0513 22:32:22.967420   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.ManagedFieldsEntry
W0513 22:32:22.967425   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Time
W0513 22:32:22.967430   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Status
W0513 22:32:22.967436   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta
crd.sh:107: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\" \"bars.company.com\"}}{{.metadata.name}}:{{end}}{{end}}: bars.company.com:foos.company.com:
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
crd.sh:146: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\" \"bars.company.com\" \"resources.mygroup.example.com\"}}{{.metadata.name}}:{{end}}{{end}}: bars.company.com:foos.company.com:resources.mygroup.example.com:
customresourcedefinition.apiextensions.k8s.io/validfoos.company.com created
W0513 22:32:23.936401   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.OwnerReference
W0513 22:32:23.936435   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
W0513 22:32:23.936441   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Patch
W0513 22:32:23.936448   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Time
W0513 22:32:23.936453   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.StatusCause
W0513 22:32:23.936458   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta
W0513 22:32:23.936464   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.StatusDetails
W0513 22:32:23.936470   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Preconditions
W0513 22:32:23.936476   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.ManagedFieldsEntry
W0513 22:32:23.936481   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.FieldsV1
W0513 22:32:23.936486   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions
W0513 22:32:23.936491   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Status
W0513 22:32:23.936498   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.FieldsV1
W0513 22:32:23.936503   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.ManagedFieldsEntry
W0513 22:32:23.936508   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.StatusCause
W0513 22:32:23.936513   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Patch
W0513 22:32:23.936518   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
W0513 22:32:23.936523   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.OwnerReference
W0513 22:32:23.936528   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Status
W0513 22:32:23.936535   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta
W0513 22:32:23.936540   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions
W0513 22:32:23.936545   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Time
W0513 22:32:23.936550   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.StatusDetails
W0513 22:32:23.936555   53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Preconditions
crd.sh:188: Successful get customresourcedefinitions {{range.items}}{{if eq .metadata.name \"foos.company.com\" \"bars.company.com\" \"resources.mygroup.example.com\" \"validfoos.company.com\"}}{{.metadata.name}}:{{end}}{{end}}: bars.company.com:foos.company.com:resources.mygroup.example.com:validfoos.company.com:
+++ [0513 22:32:24] Creating namespace namespace-1652481144-7724
namespace/namespace-1652481144-7724 created
Context "test" modified.
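The CRDs registered above (foos, bars, resources, validfoos) are what the rest of the run exercises. A minimal CustomResourceDefinition of the same general shape, sketched from the names visible in the log (the permissive schema is an assumption, not the actual test fixture):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.company.com
spec:
  group: company.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF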
+++ [0513 22:32:24] Testing kubectl non-native resources
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"company.com/v1","resources":[{"name":"validfoos","singularName":"validfoo","namespaced":true,"kind":"ValidFoo","verbs":["delete","deletecollection","get","list","patch","create","update","watch"],"storageVersionHash":"mHoViSBo05k="},{"name":"foos","singularName":"foo","namespaced":true,"kind":"Foo","verbs":["delete","deletecollection","get","list","patch","create","update","watch"],"storageVersionHash":"xIRtouR4Ix8="},{"name":"bars","singularName":"bar","namespaced":true,"kind":"Bar","verbs":["delete","deletecollection","get","list","patch","create","update","watch"],"storageVersionHash":"5GMNuFRm/lM="}]}
{"apiVersion":"company.com/v1","items":[],"kind":"FooList","metadata":{"continue":"","resourceVersion":"1299"}}
{"apiVersion":"company.com/v1","items":[],"kind":"BarList","metadata":{"continue":"","resourceVersion":"1299"}}
crd.sh:233: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}:
crd.sh:236: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}:
crd.sh:239: Successful get resources {{range.items}}{{.metadata.name}}:{{end}}:
kind.mygroup.example.com/myobj created
Successful
message:kind.mygroup.example.com/myobj
has:kind.mygroup.example.com/myobj
Successful
message:kind.mygroup.example.com/myobj
has:kind.mygroup.example.com/myobj
Successful
message:kind.mygroup.example.com/myobj
has:kind.mygroup.example.com/myobj
kind.mygroup.example.com "myobj" deleted
crd.sh:258: Successful get resources {{range.items}}{{.metadata.name}}:{{end}}:
I0513 22:32:26.842507   53075 controller.go:611] quota admission added evaluator for: foos.company.com
foo.company.com/test created
foo.company.com/second-instance created
crd.sh:265: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: test:
crd.sh:268: Successful get foo {{range.items}}{{.metadata.name}}:{{end}}: test:
crd.sh:269: Successful get foos.company.com {{range.items}}{{.metadata.name}}:{{end}}: test:
crd.sh:270: Successful get foos.v1.company.com {{range.items}}{{.metadata.name}}:{{end}}: test:
+++ [0513 22:32:27] Testing CustomResource printing
NAME   AGE
test   1s
NAME   AGE
test   2s
foo.company.com/test
foo.company.com/test
NAME   AGE
test   2s
NAME   AGE
test   2s
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "company.com/v1",
            "kind": "Foo",
            "metadata": {
                "creationTimestamp": "2022-05-13T22:32:26Z",
                "generation": 1,
                "labels": {
                    "pruneGroup": "true"
                },
                "name": "test",
                "namespace": "namespace-1652481144-7724",
                "resourceVersion": "1303",
                "uid": "4c6b7e19-68bb-4190-9426-9135b27f76cc"
            },
            "nestedField": {
                "otherSubfield": "subfield2",
                "someSubfield": "subfield1"
            },
            "otherField": "field2",
            "someField": "field1"
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": ""
    }
}
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "creationTimestamp": "2022-05-13T22:32:26Z",
        "generation": 1,
        "labels": {
            "pruneGroup": "true"
        },
        "name": "test",
        "namespace": "namespace-1652481144-7724",
        "resourceVersion": "1303",
        "uid": "4c6b7e19-68bb-4190-9426-9135b27f76cc"
    },
    "nestedField": {
        "otherSubfield": "subfield2",
        "someSubfield": "subfield1"
    },
    "otherField": "field2",
    "someField": "field1"
}
apiVersion: v1
items:
- apiVersion: company.com/v1
  kind: Foo
  metadata:
    creationTimestamp: "2022-05-13T22:32:26Z"
    generation: 1
    labels:
      pruneGroup: "true"
    name: test
    namespace: namespace-1652481144-7724
    resourceVersion: "1303"
    uid: 4c6b7e19-68bb-4190-9426-9135b27f76cc
  nestedField:
    otherSubfield: subfield2
    someSubfield: subfield1
  otherField: field2
  someField: field1
kind: List
metadata:
  resourceVersion: ""
apiVersion: company.com/v1
kind: Foo
metadata:
  creationTimestamp: "2022-05-13T22:32:26Z"
  generation: 1
  labels:
    pruneGroup: "true"
  name: test
  namespace: namespace-1652481144-7724
  resourceVersion: "1303"
  uid: 4c6b7e19-68bb-4190-9426-9135b27f76cc
nestedField:
  otherSubfield: subfield2
  someSubfield: subfield1
otherField: field2
someField: field1
field1field1field1field1
Successful
message:foo.company.com/test
has:foo.company.com/test
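CustomResources carry no strategic-merge schema, so the patching cases that follow have to use a JSON merge patch; the harness asserts the exact error for the strategic attempt. The happy path looks like:

# strategic merge is rejected for CRs; --type=merge works
kubectl patch foos/test --type=merge -p '{"patched":"value1"}'
# in a merge patch, setting a key to null deletes it again
kubectl patch foos/test --type=merge -p '{"patched":null}'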
+++ [0513 22:32:29] Testing CustomResource patching
foo.company.com/test patched
crd.sh:294: Successful get foos/test {{.patched}}: value1
Flag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:296: Successful get foos/test {{.patched}}: value2
Flag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:298: Successful get foos/test {{.patched}}:
+++ [0513 22:32:29] "kubectl patch --local" returns error as expected for CustomResource: error: strategic merge patch is not supported for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
        },
        "creationTimestamp": "2022-05-13T22:32:26Z",
        "generation": 4,
        "labels": {
            "pruneGroup": "true"
        },
        "name": "test",
        "namespace": "namespace-1652481144-7724",
        "resourceVersion": "1310",
        "uid": "4c6b7e19-68bb-4190-9426-9135b27f76cc"
    },
    "nestedField": {
        "otherSubfield": "subfield2",
        "someSubfield": "subfield1"
    },
    "otherField": "field2",
    "patched": "value3",
    "someField": "field1"
}
Flag --record has been deprecated, --record will be removed in the future
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --record=true --filename=/tmp/tmp.Md4WNb5sHW/crd-foos-test.json --patch={\"patched\":\"value3\"} --type=merge --output=json"
        },
        "creationTimestamp": "2022-05-13T22:32:26Z",
        "generation": 5,
        "labels": {
            "pruneGroup": "true"
        },
        "name": "test",
        "namespace": "namespace-1652481144-7724",
        "resourceVersion": "1312",
        "uid": "4c6b7e19-68bb-4190-9426-9135b27f76cc"
    },
    "nestedField": {
        "otherSubfield": "subfield2",
        "someSubfield": "subfield1"
    },
    "otherField": "field2",
    "patched": "value3",
    "someField": "field1"
}
crd.sh:315: Successful get foos/test {{.patched}}: value3
+++ [0513 22:32:29] Testing CustomResource labeling
foo.company.com/test labeled
foo.company.com/test labeled
foo.company.com/second-instance labeled
foo.company.com/test labeled
allnsLabel: "true"
allnsLabel: "true"
+++ [0513 22:32:30] Testing CustomResource annotating
foo.company.com/test annotated
foo.company.com/test annotated
foo.company.com/second-instance annotated
foo.company.com/test annotated
allnsannotation: "true"
allnsannotation: "true"
+++ [0513 22:32:30] Testing CustomResource describing
--filename=/tmp/tm... listannotation: true API Version: company.com/v1 Kind: Foo Metadata: Creation Timestamp: 2022-05-13T22:32:26Z Generation: 5 Managed Fields: API Version: company.com/v1 Fields Type: FieldsV1 fieldsV1: f:metadata: f:labels: .: f:pruneGroup: f:nestedField: .: f:otherSubfield: f:someSubfield: f:otherField: f:someField: Manager: kubectl-create Operation: Update Time: 2022-05-13T22:32:26Z API Version: company.com/v1 Fields Type: FieldsV1 fieldsV1: f:metadata: f:annotations: .: f:kubernetes.io/change-cause: f:patched: Manager: kubectl-patch Operation: Update Time: 2022-05-13T22:32:29Z API Version: company.com/v1 Fields Type: FieldsV1 fieldsV1: f:metadata: f:annotations: f:allnsannotation: f:itemannotation: f:listannotation: Manager: kubectl-annotate Operation: Update Time: 2022-05-13T22:32:30Z API Version: company.com/v1 Fields Type: FieldsV1 fieldsV1: f:metadata: f:labels: f:allnsLabel: f:itemlabel: f:listlabel: Manager: kubectl-label Operation: Update Time: 2022-05-13T22:32:30Z Resource Version: 1321 UID: 4c6b7e19-68bb-4190-9426-9135b27f76cc Nested Field: Other Subfield: subfield2 Some Subfield: subfield1 Other Field: field2 Patched: value3 Some Field: field1 Events: Name: test Namespace: namespace-1652481144-7724 Labels: allnsLabel=true itemlabel=true listlabel=true pruneGroup=true Annotations: allnsannotation: true itemannotation: true kubernetes.io/change-cause: kubectl patch --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --record=true --filename=/tmp/tm... listannotation: true API Version: company.com/v1 Kind: Foo Metadata: Creation Timestamp: 2022-05-13T22:32:26Z Generation: 5 Managed Fields: API Version: company.com/v1 Fields Type: FieldsV1 fieldsV1: f:metadata: f:labels: .: f:pruneGroup: f:nestedField: .: f:otherSubfield: f:someSubfield: f:otherField: f:someField: Manager: kubectl-create Operation: Update Time: 2022-05-13T22:32:26Z API Version: company.com/v1 Fields Type: FieldsV1 fieldsV1: f:metadata: f:annotations: .: f:kubernetes.io/change-cause: f:patched: Manager: kubectl-patch Operation: Update Time: 2022-05-13T22:32:29Z API Version: company.com/v1 Fields Type: FieldsV1 fieldsV1: f:metadata: f:annotations: f:allnsannotation: f:itemannotation: f:listannotation: Manager: kubectl-annotate Operation: Update Time: 2022-05-13T22:32:30Z API Version: company.com/v1 Fields Type: FieldsV1 fieldsV1: f:metadata: f:labels: f:allnsLabel: f:itemlabel: f:listlabel: Manager: kubectl-label Operation: Update Time: 2022-05-13T22:32:30Z Resource Version: 1321 UID: 4c6b7e19-68bb-4190-9426-9135b27f76cc Nested Field: Other Subfield: subfield2 Some Subfield: subfield1 Other Field: field2 Patched: value3 Some Field: field1 Events: listlabel=true itemlabel=true query for customresourcedefinitions had limit param query for events had limit param query for customresourcedefinitions had user-specified limit param Successful describe customresourcedefinitions verbose logs: I0513 22:32:30.960416 71708 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:30.967514 71708 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 6 milliseconds I0513 22:32:30.992747 71708 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?limit=500 200 OK in 1 milliseconds I0513 22:32:30.995767 71708 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/bars.company.com 200 OK in 1 milliseconds I0513 
22:32:31.002355 71708 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dbars.company.com%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DCustomResourceDefinition%2CinvolvedObject.uid%3Dff968768-c3c5-4908-bc6a-2055e4f01b79&limit=500 200 OK in 6 milliseconds I0513 22:32:31.004601 71708 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/foos.company.com 200 OK in 1 milliseconds I0513 22:32:31.008875 71708 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.uid%3D16d62c2a-d6f3-4e7b-9e93-8f2eed0d30ca%2CinvolvedObject.name%3Dfoos.company.com%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DCustomResourceDefinition&limit=500 200 OK in 4 milliseconds I0513 22:32:31.013311 71708 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/resources.mygroup.example.com 200 OK in 3 milliseconds I0513 22:32:31.019730 71708 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dresources.mygroup.example.com%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DCustomResourceDefinition%2CinvolvedObject.uid%3D47461154-5818-478d-8f8e-5a02bcd10fd2&limit=500 200 OK in 6 milliseconds I0513 22:32:31.021667 71708 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/validfoos.company.com 200 OK in 1 milliseconds I0513 22:32:31.029403 71708 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.uid%3Dbcf5986a-decd-4b29-9e49-b76a19398065%2CinvolvedObject.name%3Dvalidfoos.company.com%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DCustomResourceDefinition&limit=500 200 OK in 7 milliseconds (Bquery for foos had limit param query for events had limit param query for foos had user-specified limit param Successful describe foos verbose logs: I0513 22:32:31.156844 71734 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:32:31.161579 71734 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:32:31.187207 71734 round_trippers.go:553] GET https://127.0.0.1:6443/apis/company.com/v1/namespaces/namespace-1652481144-7724/foos?limit=500 200 OK in 2 milliseconds I0513 22:32:31.189738 71734 round_trippers.go:553] GET https://127.0.0.1:6443/apis/company.com/v1/namespaces/namespace-1652481144-7724/foos/test 200 OK in 1 milliseconds I0513 22:32:31.191166 71734 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481144-7724/events?fieldSelector=involvedObject.name%3Dtest%2CinvolvedObject.namespace%3Dnamespace-1652481144-7724%2CinvolvedObject.kind%3DFoo%2CinvolvedObject.uid%3D4c6b7e19-68bb-4190-9426-9135b27f76cc&limit=500 200 OK in 1 milliseconds (Bfoo.company.com "test" deleted crd.sh:351: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: (BI0513 22:32:31.589259 53075 controller.go:611] quota admission added evaluator for: bars.company.com bar.company.com/test created crd.sh:357: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: test: (B+++ [0513 22:32:31] Testing CustomResource watching bar.company.com/test patched bar.company.com/test patched /home/prow/go/src/k8s.io/kubernetes/hack/lib/test.sh: line 326: 71818 Killed while [ ${tries} -lt 10 ]; do tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 
1; done Successful (Bmessage:bar.company.com/test has:bar.company.com/test /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 363: 71817 Killed kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name bar.company.com "test" deleted W0513 22:32:39.972908 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. I0513 22:32:39.972959 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for bars.company.com W0513 22:32:39.972975 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. I0513 22:32:39.972987 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for resources.mygroup.example.com W0513 22:32:39.973001 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. I0513 22:32:39.973015 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for validfoos.company.com W0513 22:32:39.973029 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage. I0513 22:32:39.973043 56663 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for foos.company.com I0513 22:32:39.973110 56663 shared_informer.go:255] Waiting for caches to sync for resource quota I0513 22:32:40.073895 56663 shared_informer.go:262] Caches are synced for resource quota I0513 22:32:40.393696 56663 shared_informer.go:255] Waiting for caches to sync for garbage collector I0513 22:32:40.393753 56663 shared_informer.go:262] Caches are synced for garbage collector crd.sh:389: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: (Bfoo.company.com/test created crd.sh:395: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: test: (Bcrd.sh:398: Successful get foos/test {{.someField}}: field1 (Bfoo.company.com/test unchanged crd.sh:404: Successful get foos/test {{.someField}}: field1 (Bcrd.sh:407: Successful get foos/test {{.nestedField.someSubfield}}: subfield1 (Bfoo.company.com/test configured crd.sh:413: Successful get foos/test {{.nestedField.someSubfield}}: modifiedSubfield (Bcrd.sh:416: Successful get foos/test {{.nestedField.otherSubfield}}: subfield2 (Bfoo.company.com/test configured crd.sh:422: Successful get foos/test {{.nestedField.otherSubfield}}: (Bcrd.sh:425: Successful get foos/test {{.nestedField.newSubfield}}: (Bfoo.company.com/test configured crd.sh:431: Successful get foos/test {{.nestedField.newSubfield}}: subfield3 (Bfoo.company.com "test" deleted crd.sh:437: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: (Bfoo.company.com/test-list created bar.company.com/test-list created crd.sh:443: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: test-list: (Bcrd.sh:444: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: test-list: (Bcrd.sh:447: Successful get foos/test-list {{.someField}}: field1 (Bcrd.sh:448: Successful get bars/test-list {{.someField}}: field1 (Bfoo.company.com/test-list unchanged bar.company.com/test-list unchanged crd.sh:454: Successful get foos/test-list {{.someField}}: field1 (Bcrd.sh:455: Successful get bars/test-list {{.someField}}: field1 (Bcrd.sh:458: Successful get foos/test-list {{.someField}}: field1 (Bcrd.sh:459: Successful get bars/test-list {{.someField}}: field1 (Bfoo.company.com/test-list configured bar.company.com/test-list configured crd.sh:465: Successful get foos/test-list {{.someField}}: modifiedField (Bcrd.sh:466: Successful get 
bars/test-list {{.someField}}: modifiedField (Bcrd.sh:469: Successful get foos/test-list {{.otherField}}: field2 (Bcrd.sh:470: Successful get bars/test-list {{.otherField}}: field2 (Bfoo.company.com/test-list configured bar.company.com/test-list configured crd.sh:476: Successful get foos/test-list {{.otherField}}: (Bcrd.sh:477: Successful get bars/test-list {{.otherField}}: (Bcrd.sh:480: Successful get foos/test-list {{.newField}}: (Bcrd.sh:481: Successful get bars/test-list {{.newField}}: (Bfoo.company.com/test-list configured bar.company.com/test-list configured crd.sh:487: Successful get foos/test-list {{.newField}}: field3 (Bcrd.sh:488: Successful get bars/test-list {{.newField}}: field3 (Bfoo.company.com "test-list" deleted bar.company.com "test-list" deleted crd.sh:494: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: (Bcrd.sh:495: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: (Bcrd.sh:499: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: (Bcrd.sh:500: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: (Bfoo.company.com/test created crd.sh:505: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: test: (Bcrd.sh:506: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: (Bbar.company.com/test created foo.company.com/test pruned crd.sh:511: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: (Bcrd.sh:512: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: test: (Bbar.company.com "test" deleted crd.sh:518: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: (Bcrd.sh:519: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: (Bnamespace/non-native-resources created bar.company.com/test created crd.sh:524: Successful get bars {{len .items}}: 1 (Bnamespace "non-native-resources" deleted crd.sh:527: Successful get bars {{len .items}}: 0 (BError from server (NotFound): namespaces "non-native-resources" not found customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted W0513 22:32:51.447612 53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta W0513 22:32:51.447636 53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.OwnerReference W0513 22:32:51.447642 53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Status W0513 22:32:51.447648 53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta W0513 22:32:51.447653 53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions W0513 22:32:51.447659 53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Time W0513 22:32:51.447665 53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.StatusDetails W0513 22:32:51.447672 53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Preconditions W0513 22:32:51.447678 53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.FieldsV1 W0513 22:32:51.447684 53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.ManagedFieldsEntry W0513 22:32:51.447689 53075 merge.go:121] Should not happen: 
OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.StatusCause W0513 22:32:51.447694 53075 merge.go:121] Should not happen: OpenAPI V3 merge schema conflict on io.k8s.apimachinery.pkg.apis.meta.v1.Patch customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted +++ exit code: 0 Recording: run_recursive_resources_tests Running command: run_recursive_resources_tests +++ Running case: test-cmd.run_recursive_resources_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_recursive_resources_tests +++ [0513 22:32:51] Testing recursive resources +++ [0513 22:32:51] Creating namespace namespace-1652481171-19557 namespace/namespace-1652481171-19557 created Context "test" modified. generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (BSuccessful (Bmessage:pod/busybox0 created pod/busybox1 created error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false has:error validating data: kind not set generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (Bgeneric-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox: (BSuccessful (Bmessage:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}' has:Object 'Kind' is missing W0513 22:32:52.448267 53075 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured E0513 22:32:52.449383 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1: (BW0513 22:32:52.535426 53075 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured E0513 22:32:52.536740 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource W0513 22:32:52.626696 53075 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured E0513 22:32:52.628246 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource W0513 22:32:52.742642 53075 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured E0513 22:32:52.744096 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced: (BSuccessful (Bmessage:pod/busybox0 replaced 
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:Name:         busybox0
Namespace:    namespace-1652481171-19557
Priority:     0
Node:
Labels:       app=busybox0
              status=replaced
Annotations:
Status:       Pending
IP:
IPs:
Containers:
  busybox:
    Image:      busybox
    Port:
    Host Port:
    Command:
      sleep
      3600
    Environment:
    Mounts:
Volumes:
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
Name:         busybox1
Namespace:    namespace-1652481171-19557
Priority:     0
Node:
Labels:       app=busybox1
              status=replaced
Annotations:
Status:       Pending
IP:
IPs:
Containers:
  busybox:
    Image:      busybox
    Port:
    Host Port:
    Command:
      sleep
      3600
    Environment:
    Mounts:
Volumes:
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:app=busybox0
Successful
message:Name:         busybox0
Namespace:    namespace-1652481171-19557
Priority:     0
Node:
Labels:       app=busybox0
              status=replaced
Annotations:
Status:       Pending
IP:
IPs:
Containers:
  busybox:
    Image:      busybox
    Port:
    Host Port:
    Command:
      sleep
      3600
    Environment:
    Mounts:
Volumes:
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
Name:         busybox1
Namespace:    namespace-1652481171-19557
Priority:     0
Node:
Labels:       app=busybox1
              status=replaced
Annotations:
Status:       Pending
IP:
IPs:
Containers:
  busybox:
    Image:      busybox
    Port:
    Host Port:
    Command:
      sleep
      3600
    Environment:
    Mounts:
Volumes:
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:app=busybox1
Successful
message:Name:         busybox0
Namespace:    namespace-1652481171-19557
Priority:     0
Node:
Labels:       app=busybox0
              status=replaced
Annotations:
Status:       Pending
IP:
IPs:
Containers:
  busybox:
    Image:      busybox
    Port:
    Host Port:
    Command:
      sleep
      3600
    Environment:
    Mounts:
Volumes:
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
Name:         busybox1
Namespace:    namespace-1652481171-19557
Priority:     0
Node:
Labels:       app=busybox1
              status=replaced
Annotations:
Status:       Pending
IP:
IPs:
Containers:
  busybox:
    Image:      busybox
    Port:
    Host Port:
    Command:
      sleep
      3600
    Environment:
    Mounts:
Volumes:
QoS Class:       BestEffort
Node-Selectors:
Tolerations:
Events:
unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0513 22:32:53.465481 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:32:53.465511 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0513 22:32:53.485605 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:32:53.485630 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:Warning: resource pods/busybox0 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox0 configured
Warning: resource pods/busybox1 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
W0513 22:32:53.941865 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:32:53.941894 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:264: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:273: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
W0513 22:32:54.261824 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:32:54.261856 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:278: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:283: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:288: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:293: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:297: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:302: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
replicationcontroller/busybox0 created
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0513 22:32:55.078395 56663 event.go:294] "Event occurred" object="namespace-1652481171-19557/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-jmjz5"
I0513 22:32:55.085130 56663 event.go:294] "Event occurred" object="namespace-1652481171-19557/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-r2ml4"
generic-resources.sh:306: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0513 22:32:55.215061 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:32:55.215093 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:311: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:312: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:313: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:318: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80
generic-resources.sh:319: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:328: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:329: Successful get rc busybox1 {{.spec.replicas}}: 1
I0513 22:32:56.107511 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481171-19557/busybox0" clusterIPs=map[IPv4:10.0.0.191]
W0513 22:32:56.122777 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:32:56.122811 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0513 22:32:56.128914 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481171-19557/busybox1" clusterIPs=map[IPv4:10.0.0.120]
I0513 22:32:56.156942 56663 namespace_controller.go:185] Namespace has been deleted non-native-resources
generic-resources.sh:333: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: 80
generic-resources.sh:334: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:340: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0513 22:32:56.434789 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:32:56.434827 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:341: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:342: Successful get rc busybox1 {{.spec.replicas}}: 1
I0513 22:32:56.607972 56663 event.go:294] "Event occurred" object="namespace-1652481171-19557/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-qdnfw"
I0513 22:32:56.621525 56663 event.go:294] "Event occurred" object="namespace-1652481171-19557/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-4r7tb"
generic-resources.sh:346: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:347: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:356: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
W0513 22:32:57.071842 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:32:57.071873 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:361: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}:
deployment.apps/nginx1-deployment created
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0513 22:32:57.327105 56663 event.go:294] "Event occurred" object="namespace-1652481171-19557/nginx1-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx1-deployment-6f7f7cfd5f to 2"
I0513 22:32:57.370995 56663 event.go:294] "Event occurred" object="namespace-1652481171-19557/nginx1-deployment-6f7f7cfd5f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-6f7f7cfd5f-tglnf"
I0513 22:32:57.371836 56663 event.go:294] "Event occurred" object="namespace-1652481171-19557/nginx0-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx0-deployment-748ff4f766 to 2"
I0513 22:32:57.386477 56663 event.go:294] "Event occurred" object="namespace-1652481171-19557/nginx0-deployment-748ff4f766" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-748ff4f766-rzchb"
I0513 22:32:57.386614 56663 event.go:294] "Event occurred" object="namespace-1652481171-19557/nginx1-deployment-6f7f7cfd5f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-6f7f7cfd5f-vvvsg"
I0513 22:32:57.405462 56663 event.go:294] "Event occurred" object="namespace-1652481171-19557/nginx0-deployment-748ff4f766" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-748ff4f766-8tqb7"
generic-resources.sh:365: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:378: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment resumed
deployment.apps/nginx0-deployment resumed
generic-resources.sh:384: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: ::
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
Successful
message:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available...
timed out waiting for the condition
unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Waiting for deployment "nginx1-deployment" rollout to finish
Successful
message:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available...
timed out waiting for the condition
unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
W0513 22:32:59.941979 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:32:59.942010 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0513 22:33:00.770167 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:33:00.770199 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available...
Waiting for deployment "nginx0-deployment" rollout to finish: 0 of 2 updated replicas are available...
timed out waiting for the condition
timed out waiting for the condition
unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Waiting for deployment "nginx0-deployment" rollout to finish
Successful
message:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available...
Waiting for deployment "nginx0-deployment" rollout to finish: 0 of 2 updated replicas are available...
timed out waiting for the condition
timed out waiting for the condition
unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Waiting for deployment "nginx1-deployment" rollout to finish
Successful
message:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available...
Waiting for deployment "nginx0-deployment" rollout to finish: 0 of 2 updated replicas are available...
timed out waiting for the condition
timed out waiting for the condition
unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
Successful
message:deployment.apps/nginx1-deployment
REVISION  CHANGE-CAUSE
1
deployment.apps/nginx0-deployment
REVISION  CHANGE-CAUSE
1
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment
REVISION  CHANGE-CAUSE
1
deployment.apps/nginx0-deployment
REVISION  CHANGE-CAUSE
1
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment
REVISION  CHANGE-CAUSE
1
deployment.apps/nginx0-deployment
REVISION  CHANGE-CAUSE
1
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0513 22:33:01.475952 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:33:01.475984 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0513 22:33:01.499825 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:33:01.499856 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:411: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
replicationcontroller/busybox0 created
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0513 22:33:02.619852 56663 event.go:294] "Event occurred" object="namespace-1652481171-19557/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-wgkkz"
I0513 22:33:02.632783 56663 event.go:294] "Event occurred" object="namespace-1652481171-19557/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-ms9vf"
generic-resources.sh:415: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:no rollbacker has been implemented for "ReplicationController"
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
+++ exit code: 0
Recording: run_namespace_tests
Running command: run_namespace_tests
+++ Running case: test-cmd.run_namespace_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0513 22:33:04] Testing kubectl(v1:namespaces)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1471: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
query for namespaces had limit param
query for resourcequotas had limit param
query for limitranges had limit param
query for namespaces had user-specified limit param
Successful describe namespaces verbose logs:
I0513 22:33:04.594601 74088 loader.go:372] Config loaded from file:  /tmp/tmp.Md4WNb5sHW/.kube/config
I0513 22:33:04.599494 74088 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0513 22:33:04.632104 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces?limit=500 200 OK in 3 milliseconds
I0513 22:33:04.641605 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default 200 OK in 1 milliseconds
I0513 22:33:04.643110 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/resourcequotas?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.644484 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/limitranges?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.646008 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-node-lease 200 OK in 1 milliseconds
I0513 22:33:04.647308 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-node-lease/resourcequotas?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.648448 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-node-lease/limitranges?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.650046 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-public 200 OK in 1 milliseconds
I0513 22:33:04.651259 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-public/resourcequotas?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.652480 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-public/limitranges?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.653914 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-system 200 OK in 1 milliseconds
I0513 22:33:04.655057 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-system/resourcequotas?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.656074 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/kube-system/limitranges?limit=500 200 OK in 0 milliseconds
I0513 22:33:04.657560 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/my-namespace 200 OK in 1 milliseconds
I0513 22:33:04.658739 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/my-namespace/resourcequotas?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.659899 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/my-namespace/limitranges?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.661269 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652480982-11192 200 OK in 1 milliseconds
I0513 22:33:04.662595 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652480982-11192/resourcequotas?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.663787 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652480982-11192/limitranges?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.665144 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652480982-23886 200 OK in 0 milliseconds
I0513 22:33:04.666224 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652480982-23886/resourcequotas?limit=500 200 OK in 0 milliseconds
I0513 22:33:04.667279 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652480982-23886/limitranges?limit=500 200 OK in 0 milliseconds
I0513 22:33:04.668730 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652480983-20082 200 OK in 1 milliseconds
I0513 22:33:04.669909 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652480983-20082/resourcequotas?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.670994 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652480983-20082/limitranges?limit=500 200 OK in 0 milliseconds
I0513 22:33:04.672514 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652480985-26870 200 OK in 1 milliseconds
I0513 22:33:04.673714 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652480985-26870/resourcequotas?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.674785 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652480985-26870/limitranges?limit=500 200 OK in 0 milliseconds
I0513 22:33:04.676193 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652480993-8368 200 OK in 1 milliseconds
I0513 22:33:04.677221 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652480993-8368/resourcequotas?limit=500 200 OK in 0 milliseconds
I0513 22:33:04.678446 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652480993-8368/limitranges?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.680199 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481001-2375 200 OK in 1 milliseconds
I0513 22:33:04.681350 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481001-2375/resourcequotas?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.682613 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481001-2375/limitranges?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.684096 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481005-22225 200 OK in 1 milliseconds
I0513 22:33:04.685221 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481005-22225/resourcequotas?limit=500 200 OK in 0 milliseconds
I0513 22:33:04.686341 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481005-22225/limitranges?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.687720 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481005-3273 200 OK in 1 milliseconds
I0513 22:33:04.689670 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481005-3273/resourcequotas?limit=500 200 OK in 0 milliseconds
I0513 22:33:04.690693 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481005-3273/limitranges?limit=500 200 OK in 0 milliseconds
I0513 22:33:04.692216 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481008-14417 200 OK in 1 milliseconds
I0513 22:33:04.693407 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481008-14417/resourcequotas?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.694565 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481008-14417/limitranges?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.696045 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481009-28266 200 OK in 1 milliseconds
I0513 22:33:04.697314 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481009-28266/resourcequotas?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.698674 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481009-28266/limitranges?limit=500 200 OK in 1 milliseconds
I0513 22:33:04.700382 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481009-4588 200 OK in 1 milliseconds
I0513 22:33:04.701766 74088
round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481009-4588/resourcequotas?limit=500 200 OK in 1 milliseconds I0513 22:33:04.703033 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481009-4588/limitranges?limit=500 200 OK in 1 milliseconds I0513 22:33:04.704591 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481020-13892 200 OK in 1 milliseconds I0513 22:33:04.705720 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481020-13892/resourcequotas?limit=500 200 OK in 0 milliseconds I0513 22:33:04.706739 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481020-13892/limitranges?limit=500 200 OK in 0 milliseconds I0513 22:33:04.708715 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481020-21605 200 OK in 1 milliseconds I0513 22:33:04.709824 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481020-21605/resourcequotas?limit=500 200 OK in 0 milliseconds I0513 22:33:04.710884 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481020-21605/limitranges?limit=500 200 OK in 0 milliseconds I0513 22:33:04.712373 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481033-24918 200 OK in 1 milliseconds I0513 22:33:04.713534 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481033-24918/resourcequotas?limit=500 200 OK in 1 milliseconds I0513 22:33:04.714554 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481033-24918/limitranges?limit=500 200 OK in 0 milliseconds I0513 22:33:04.716047 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481033-5199 200 OK in 1 milliseconds I0513 22:33:04.717103 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481033-5199/resourcequotas?limit=500 200 OK in 0 milliseconds I0513 22:33:04.718326 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481033-5199/limitranges?limit=500 200 OK in 1 milliseconds I0513 22:33:04.719769 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481035-6580 200 OK in 1 milliseconds I0513 22:33:04.720815 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481035-6580/resourcequotas?limit=500 200 OK in 0 milliseconds I0513 22:33:04.721867 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481035-6580/limitranges?limit=500 200 OK in 0 milliseconds I0513 22:33:04.723301 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481036-9652 200 OK in 1 milliseconds I0513 22:33:04.724503 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481036-9652/resourcequotas?limit=500 200 OK in 1 milliseconds I0513 22:33:04.725590 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481036-9652/limitranges?limit=500 200 OK in 0 milliseconds I0513 22:33:04.726960 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481037-29023 200 OK in 1 milliseconds I0513 22:33:04.728144 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481037-29023/resourcequotas?limit=500 200 OK 
in 1 milliseconds I0513 22:33:04.729196 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481037-29023/limitranges?limit=500 200 OK in 0 milliseconds I0513 22:33:04.730530 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481039-18762 200 OK in 0 milliseconds I0513 22:33:04.731659 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481039-18762/resourcequotas?limit=500 200 OK in 0 milliseconds I0513 22:33:04.732859 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481039-18762/limitranges?limit=500 200 OK in 1 milliseconds I0513 22:33:04.734137 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481039-21952 200 OK in 0 milliseconds I0513 22:33:04.735199 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481039-21952/resourcequotas?limit=500 200 OK in 0 milliseconds I0513 22:33:04.736291 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481039-21952/limitranges?limit=500 200 OK in 0 milliseconds I0513 22:33:04.737714 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481089-3702 200 OK in 1 milliseconds I0513 22:33:04.738948 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481089-3702/resourcequotas?limit=500 200 OK in 1 milliseconds I0513 22:33:04.740100 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481089-3702/limitranges?limit=500 200 OK in 1 milliseconds I0513 22:33:04.741463 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481094-345 200 OK in 1 milliseconds I0513 22:33:04.742705 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481094-345/resourcequotas?limit=500 200 OK in 1 milliseconds I0513 22:33:04.743734 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481094-345/limitranges?limit=500 200 OK in 0 milliseconds I0513 22:33:04.745075 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481095-22575 200 OK in 1 milliseconds I0513 22:33:04.746160 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481095-22575/resourcequotas?limit=500 200 OK in 0 milliseconds I0513 22:33:04.747194 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481095-22575/limitranges?limit=500 200 OK in 0 milliseconds I0513 22:33:04.748603 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481096-1146 200 OK in 1 milliseconds I0513 22:33:04.749840 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481096-1146/resourcequotas?limit=500 200 OK in 1 milliseconds I0513 22:33:04.750876 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481096-1146/limitranges?limit=500 200 OK in 0 milliseconds I0513 22:33:04.752516 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481111-9703 200 OK in 1 milliseconds I0513 22:33:04.753653 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481111-9703/resourcequotas?limit=500 200 OK in 0 milliseconds I0513 22:33:04.754749 74088 round_trippers.go:553] GET 
https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481111-9703/limitranges?limit=500 200 OK in 1 milliseconds I0513 22:33:04.756103 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481118-5908 200 OK in 0 milliseconds I0513 22:33:04.757217 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481118-5908/resourcequotas?limit=500 200 OK in 0 milliseconds I0513 22:33:04.758362 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481118-5908/limitranges?limit=500 200 OK in 1 milliseconds I0513 22:33:04.759873 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481118-9708 200 OK in 1 milliseconds I0513 22:33:04.760982 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481118-9708/resourcequotas?limit=500 200 OK in 0 milliseconds I0513 22:33:04.762021 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481118-9708/limitranges?limit=500 200 OK in 0 milliseconds I0513 22:33:04.763720 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481128-17965 200 OK in 1 milliseconds I0513 22:33:04.764859 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481128-17965/resourcequotas?limit=500 200 OK in 1 milliseconds I0513 22:33:04.765969 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481128-17965/limitranges?limit=500 200 OK in 0 milliseconds I0513 22:33:04.767325 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481133-16194 200 OK in 1 milliseconds I0513 22:33:04.768482 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481133-16194/resourcequotas?limit=500 200 OK in 1 milliseconds I0513 22:33:04.769565 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481133-16194/limitranges?limit=500 200 OK in 0 milliseconds I0513 22:33:04.770938 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481134-32466 200 OK in 1 milliseconds I0513 22:33:04.772074 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481134-32466/resourcequotas?limit=500 200 OK in 0 milliseconds I0513 22:33:04.773199 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481134-32466/limitranges?limit=500 200 OK in 1 milliseconds I0513 22:33:04.774560 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481137-26432 200 OK in 0 milliseconds I0513 22:33:04.775601 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481137-26432/resourcequotas?limit=500 200 OK in 0 milliseconds I0513 22:33:04.776665 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481137-26432/limitranges?limit=500 200 OK in 0 milliseconds I0513 22:33:04.778156 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481139-15822 200 OK in 1 milliseconds I0513 22:33:04.779507 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481139-15822/resourcequotas?limit=500 200 OK in 1 milliseconds I0513 22:33:04.780638 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481139-15822/limitranges?limit=500 200 OK in 1 milliseconds I0513 
22:33:04.782235 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481139-6998 200 OK in 1 milliseconds I0513 22:33:04.783447 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481139-6998/resourcequotas?limit=500 200 OK in 1 milliseconds I0513 22:33:04.784578 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481139-6998/limitranges?limit=500 200 OK in 1 milliseconds I0513 22:33:04.785974 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481140-19966 200 OK in 0 milliseconds I0513 22:33:04.787055 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481140-19966/resourcequotas?limit=500 200 OK in 0 milliseconds I0513 22:33:04.788192 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481140-19966/limitranges?limit=500 200 OK in 1 milliseconds I0513 22:33:04.789564 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481142-24542 200 OK in 1 milliseconds I0513 22:33:04.790686 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481142-24542/resourcequotas?limit=500 200 OK in 1 milliseconds I0513 22:33:04.791838 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481142-24542/limitranges?limit=500 200 OK in 1 milliseconds I0513 22:33:04.794099 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481144-7724 200 OK in 1 milliseconds I0513 22:33:04.795240 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481144-7724/resourcequotas?limit=500 200 OK in 1 milliseconds I0513 22:33:04.796276 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481144-7724/limitranges?limit=500 200 OK in 0 milliseconds I0513 22:33:04.797734 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481171-19557 200 OK in 1 milliseconds I0513 22:33:04.799072 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481171-19557/resourcequotas?limit=500 200 OK in 1 milliseconds I0513 22:33:04.800286 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481171-19557/limitranges?limit=500 200 OK in 1 milliseconds I0513 22:33:04.801816 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/nsb 200 OK in 1 milliseconds I0513 22:33:04.803000 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/nsb/resourcequotas?limit=500 200 OK in 1 milliseconds I0513 22:33:04.804171 74088 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/nsb/limitranges?limit=500 200 OK in 1 milliseconds (Bnamespace "my-namespace" deleted W0513 22:33:08.457730 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0513 22:33:08.457762 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource W0513 22:33:09.058605 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource 
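
The long run of round_trippers lines above is what the preceding "query for ... had limit param" checks grep: the harness runs kubectl describe with request-level verbosity and asserts that every list call is paginated (limit=500) and that a user-specified limit is honored. Roughly the same trace can be produced outside the harness with kubectl's -v flag (a sketch, not the harness's literal invocation):

    # -v=6 makes client-go's round_trippers log each request URL, status, and latency
    kubectl describe namespaces -v=6 2>&1 | grep 'round_trippers.*limit=500'

The earlier "unable to decode ... Object 'Kind' is missing" assertions exercise recursive (-R) processing of a directory containing one intentionally broken file: the quoted JSON spells the kind field as "ind", so kubectl must report the decode failure for busybox2's manifest while still acting on the valid busybox0/busybox1 siblings. A minimal manifest of the same broken shape, reconstructed from the JSON in the error message (illustrative, not the verbatim hack/testdata fixture):

    # "ind:" should be "kind:" -- without it kubectl cannot decode the object
    cat <<'EOF' > busybox-broken.yaml
    apiVersion: v1
    ind: ReplicationController
    metadata:
      name: busybox2
      labels:
        app: busybox2
    spec:
      replicas: 1
      selector:
        app: busybox2
      template:
        metadata:
          name: busybox2
          labels:
            app: busybox2
        spec:
          containers:
          - name: busybox
            image: busybox
            imagePullPolicy: IfNotPresent
            command: ["sleep", "3600"]
          restartPolicy: Always
    EOF
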
E0513 22:33:09.058659 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource I0513 22:33:10.079102 56663 shared_informer.go:255] Waiting for caches to sync for resource quota I0513 22:33:10.079141 56663 shared_informer.go:262] Caches are synced for resource quota W0513 22:33:10.139372 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0513 22:33:10.139406 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource namespace/my-namespace condition met Successful (Bmessage:Error from server (NotFound): namespaces "my-namespace" not found has: not found namespace/my-namespace created I0513 22:33:10.398841 56663 shared_informer.go:255] Waiting for caches to sync for garbage collector I0513 22:33:10.398915 56663 shared_informer.go:262] Caches are synced for garbage collector I0513 22:33:10.446666 56663 horizontal.go:360] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1652481171-19557 core.sh:1482: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace (BI0513 22:33:10.452847 56663 horizontal.go:360] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1652481171-19557 Successful (Bmessage:warning: deleting cluster-scoped resources, not scoped to the provided namespace namespace "kube-node-lease" deleted namespace "my-namespace" deleted namespace "namespace-1652480982-11192" deleted namespace "namespace-1652480982-23886" deleted namespace "namespace-1652480983-20082" deleted namespace "namespace-1652480985-26870" deleted namespace "namespace-1652480993-8368" deleted namespace "namespace-1652481001-2375" deleted namespace "namespace-1652481005-22225" deleted namespace "namespace-1652481005-3273" deleted namespace "namespace-1652481008-14417" deleted namespace "namespace-1652481009-28266" deleted namespace "namespace-1652481009-4588" deleted namespace "namespace-1652481020-13892" deleted namespace "namespace-1652481020-21605" deleted namespace "namespace-1652481033-24918" deleted namespace "namespace-1652481033-5199" deleted namespace "namespace-1652481035-6580" deleted namespace "namespace-1652481036-9652" deleted namespace "namespace-1652481037-29023" deleted namespace "namespace-1652481039-18762" deleted namespace "namespace-1652481039-21952" deleted namespace "namespace-1652481089-3702" deleted namespace "namespace-1652481094-345" deleted namespace "namespace-1652481095-22575" deleted namespace "namespace-1652481096-1146" deleted namespace "namespace-1652481111-9703" deleted namespace "namespace-1652481118-5908" deleted namespace "namespace-1652481118-9708" deleted namespace "namespace-1652481128-17965" deleted namespace "namespace-1652481133-16194" deleted namespace "namespace-1652481134-32466" deleted namespace "namespace-1652481137-26432" deleted namespace "namespace-1652481139-15822" deleted namespace "namespace-1652481139-6998" deleted namespace "namespace-1652481140-19966" deleted namespace "namespace-1652481142-24542" deleted namespace "namespace-1652481144-7724" deleted namespace "namespace-1652481171-19557" deleted namespace "nsb" deleted Error from server (Forbidden): namespaces 
"default" is forbidden: this namespace may not be deleted Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted has:warning: deleting cluster-scoped resources Successful (Bmessage:warning: deleting cluster-scoped resources, not scoped to the provided namespace namespace "kube-node-lease" deleted namespace "my-namespace" deleted namespace "namespace-1652480982-11192" deleted namespace "namespace-1652480982-23886" deleted namespace "namespace-1652480983-20082" deleted namespace "namespace-1652480985-26870" deleted namespace "namespace-1652480993-8368" deleted namespace "namespace-1652481001-2375" deleted namespace "namespace-1652481005-22225" deleted namespace "namespace-1652481005-3273" deleted namespace "namespace-1652481008-14417" deleted namespace "namespace-1652481009-28266" deleted namespace "namespace-1652481009-4588" deleted namespace "namespace-1652481020-13892" deleted namespace "namespace-1652481020-21605" deleted namespace "namespace-1652481033-24918" deleted namespace "namespace-1652481033-5199" deleted namespace "namespace-1652481035-6580" deleted namespace "namespace-1652481036-9652" deleted namespace "namespace-1652481037-29023" deleted namespace "namespace-1652481039-18762" deleted namespace "namespace-1652481039-21952" deleted namespace "namespace-1652481089-3702" deleted namespace "namespace-1652481094-345" deleted namespace "namespace-1652481095-22575" deleted namespace "namespace-1652481096-1146" deleted namespace "namespace-1652481111-9703" deleted namespace "namespace-1652481118-5908" deleted namespace "namespace-1652481118-9708" deleted namespace "namespace-1652481128-17965" deleted namespace "namespace-1652481133-16194" deleted namespace "namespace-1652481134-32466" deleted namespace "namespace-1652481137-26432" deleted namespace "namespace-1652481139-15822" deleted namespace "namespace-1652481139-6998" deleted namespace "namespace-1652481140-19966" deleted namespace "namespace-1652481142-24542" deleted namespace "namespace-1652481144-7724" deleted namespace "namespace-1652481171-19557" deleted namespace "nsb" deleted Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted has:namespace "my-namespace" deleted namespace/quotas created core.sh:1489: Successful get namespaces/quotas {{.metadata.name}}: quotas (Bcore.sh:1490: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: : (Bresourcequota/test-quota created (dry run) resourcequota/test-quota created (server dry run) core.sh:1494: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: : (Bresourcequota/test-quota created core.sh:1497: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: found: (Bquery for resourcequotas had limit param query for resourcequotas had user-specified limit param Successful describe resourcequotas verbose logs: I0513 22:33:11.492273 74290 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:33:11.496926 74290 round_trippers.go:553] GET 
https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0513 22:33:11.517493 74290 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/quotas/resourcequotas?limit=500 200 OK in 1 milliseconds
I0513 22:33:11.519646 74290 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/quotas/resourcequotas/test-quota 200 OK in 1 milliseconds
I0513 22:33:11.645366 56663 resource_quota_controller.go:311] Resource quota has been deleted quotas/test-quota
resourcequota "test-quota" deleted
namespace "quotas" deleted
W0513 22:33:12.931517 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:33:12.931544 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1511: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
namespace/other created
core.sh:1515: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1519: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
core.sh:1523: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1525: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
core.sh:1532: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
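
The warning above and the "force deleted" confirmation just below come from immediate deletion: kubectl removes the API object without waiting for graceful termination, so a kubelet could in principle still be running the containers. A sketch of the kind of invocation that prints this warning (assumed shape, not the harness's literal command):

    # --grace-period=0 is only accepted together with --force
    kubectl delete pod valid-pod --namespace=other --force --grace-period=0
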
pod "valid-pod" force deleted core.sh:1536: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: (Bnamespace "other" deleted I0513 22:33:20.325164 56663 namespace_controller.go:185] Namespace has been deleted my-namespace I0513 22:33:20.748370 56663 namespace_controller.go:185] Namespace has been deleted kube-node-lease I0513 22:33:20.828879 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652480982-11192 I0513 22:33:20.941635 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652480982-23886 I0513 22:33:20.941669 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652480983-20082 I0513 22:33:20.951963 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652480993-8368 I0513 22:33:20.951985 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652480985-26870 I0513 22:33:20.962347 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481005-22225 I0513 22:33:20.962377 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481001-2375 I0513 22:33:20.980719 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481008-14417 I0513 22:33:21.007638 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481005-3273 I0513 22:33:21.188990 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481009-28266 I0513 22:33:21.349981 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481009-4588 I0513 22:33:21.444618 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481020-13892 I0513 22:33:21.475202 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481033-5199 I0513 22:33:21.475233 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481033-24918 I0513 22:33:21.488603 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481036-9652 I0513 22:33:21.497894 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481035-6580 I0513 22:33:21.520265 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481020-21605 I0513 22:33:21.574869 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481037-29023 I0513 22:33:21.663547 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481039-18762 I0513 22:33:21.705310 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481039-21952 I0513 22:33:21.804709 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481089-3702 I0513 22:33:21.972883 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481095-22575 I0513 22:33:21.979115 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481094-345 I0513 22:33:22.022654 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481111-9703 I0513 22:33:22.030516 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481118-9708 I0513 22:33:22.035918 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481118-5908 I0513 22:33:22.097510 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481128-17965 I0513 22:33:22.139941 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481133-16194 I0513 22:33:22.149177 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481096-1146 
I0513 22:33:22.287174 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481134-32466
I0513 22:33:22.555052 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481139-15822
I0513 22:33:22.566233 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481139-6998
I0513 22:33:22.571448 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481144-7724
I0513 22:33:22.638069 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481137-26432
I0513 22:33:22.653272 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481140-19966
I0513 22:33:22.669465 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481142-24542
I0513 22:33:22.702228 56663 namespace_controller.go:185] Namespace has been deleted nsb
I0513 22:33:22.741746 56663 namespace_controller.go:185] Namespace has been deleted quotas
I0513 22:33:22.768048 56663 namespace_controller.go:185] Namespace has been deleted namespace-1652481171-19557
+++ exit code: 0
Recording: run_secrets_test
Running command: run_secrets_test
+++ Running case: test-cmd.run_secrets_test
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_secrets_test
+++ [0513 22:33:23] Creating namespace namespace-1652481203-31996
namespace/namespace-1652481203-31996 created
Context "test" modified.
+++ [0513 22:33:23] Testing secrets
I0513 22:33:23.985191 74586 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config
Successful
message:apiVersion: v1
data:
  key1: dmFsdWUx
kind: Secret
metadata:
  creationTimestamp: null
  name: test
has:kind: Secret
Successful
message:apiVersion: v1
data:
  key1: dmFsdWUx
kind: Secret
metadata:
  creationTimestamp: null
  name: test
has:apiVersion: v1
Successful
message:apiVersion: v1
data:
  key1: dmFsdWUx
kind: Secret
metadata:
  creationTimestamp: null
  name: test
has:key1: dmFsdWUx
Successful
message:apiVersion: v1
data:
  key1: dmFsdWUx
kind: Secret
metadata:
  creationTimestamp: null
  name: test
has not:example.com
core.sh:831: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-secrets\" }}found{{end}}{{end}}:: :
namespace/test-secrets created
core.sh:835: Successful get namespaces/test-secrets {{.metadata.name}}: test-secrets
core.sh:839: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}:
secret/test-secret created
core.sh:843: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:844: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
query for secrets had limit param
query for secrets had user-specified limit param
Successful describe secrets verbose logs:
I0513 22:33:24.549368 74710 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config
I0513 22:33:24.553988 74710 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0513 22:33:24.575030 74710 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-secrets/secrets?limit=500 200 OK in 1 milliseconds
I0513 22:33:24.576837 74710 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-secrets/secrets/test-secret 200 OK in 1 milliseconds
secret "test-secret" deleted
core.sh:856: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}:
secret/test-secret created
core.sh:860: Successful get secret/test-secret --namespace=test-secrets
{{.metadata.name}}: test-secret
core.sh:861: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
secret "test-secret" deleted
core.sh:871: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}:
secret/test-secret created
core.sh:875: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:876: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
secret "test-secret" deleted
core.sh:886: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}:
secret/test-secret created
W0513 22:33:25.664405 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:33:25.664441 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:889: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:890: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
secret "test-secret" deleted
secret/test-secret created
core.sh:896: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:897: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
secret "test-secret" deleted
secret/secret-string-data created
core.sh:919: Successful get secret/secret-string-data --namespace=test-secrets {{.data}}: map[k1:djE= k2:djI=]
core.sh:920: Successful get secret/secret-string-data --namespace=test-secrets {{.data}}: map[k1:djE= k2:djI=]
core.sh:921: Successful get secret/secret-string-data --namespace=test-secrets {{.stringData}}:
secret "secret-string-data" deleted
core.sh:930: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}:
secret "test-secret" deleted
namespace "test-secrets" deleted
W0513 22:33:28.560503 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:33:28.560534 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0513 22:33:28.772507 56663 namespace_controller.go:185] Namespace has been deleted other
W0513 22:33:30.306397 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:33:30.306425 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests
+++ Running case: test-cmd.run_configmap_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_configmap_tests
+++ [0513 22:33:31] Creating namespace namespace-1652481211-25049
namespace/namespace-1652481211-25049 created
Context "test" modified.
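
The secrets phase that just ended cycles test-secret through the built-in types visible in the assertions: a generic secret with a custom type (test-type), kubernetes.io/dockerconfigjson, and kubernetes.io/tls, plus a secret whose stringData is asserted to round-trip into base64-encoded .data (map[k1:djE= k2:djI=] is just {"k1":"v1","k2":"v2"} after encoding). Sketches of equivalent creations, with illustrative values rather than the harness's fixtures:

    # generic secret with a custom type; "value1" round-trips as dmFsdWUx
    kubectl create secret generic test-secret --type=test-type --from-literal=key1=value1
    # type kubernetes.io/dockerconfigjson
    kubectl create secret docker-registry test-secret \
        --docker-username=user --docker-password=pass --docker-email=user@example.com
    # type kubernetes.io/tls (cert/key paths are placeholders)
    kubectl create secret tls test-secret --cert=tls.crt --key=tls.key
    echo -n v1 | base64   # prints djE=, matching the asserted .data map
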
+++ [0513 22:33:32] Testing configmaps configmap/test-configmap created core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap (Bconfigmap "test-configmap" deleted core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: : (Bnamespace/test-configmaps created core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps (Bcore.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: : (Bcore.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: : (Bconfigmap/test-configmap created (dry run) configmap/test-configmap created (server dry run) core.sh:46: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: : (Bconfigmap/test-configmap created configmap/test-binary-configmap created core.sh:51: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap (Bcore.sh:52: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap (Bquery for configmaps had limit param query for events had limit param query for configmaps had user-specified limit param Successful describe configmaps verbose logs: I0513 22:33:33.403840 75453 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:33:33.408395 75453 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:33:33.429524 75453 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/configmaps?limit=500 200 OK in 1 milliseconds I0513 22:33:33.431778 75453 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/configmaps/kube-root-ca.crt 200 OK in 1 milliseconds I0513 22:33:33.433141 75453 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/events?fieldSelector=involvedObject.kind%3DConfigMap%2CinvolvedObject.uid%3D13ea98fd-6240-4a80-aadb-2b1f81bdfc10%2CinvolvedObject.name%3Dkube-root-ca.crt%2CinvolvedObject.namespace%3Dtest-configmaps&limit=500 200 OK in 1 milliseconds I0513 22:33:33.434814 75453 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/configmaps/test-binary-configmap 200 OK in 1 milliseconds I0513 22:33:33.436069 75453 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/events?fieldSelector=involvedObject.namespace%3Dtest-configmaps%2CinvolvedObject.kind%3DConfigMap%2CinvolvedObject.uid%3D509ed762-c9c5-43d1-a98c-71030644e48a%2CinvolvedObject.name%3Dtest-binary-configmap&limit=500 200 OK in 1 milliseconds I0513 22:33:33.437452 75453 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/configmaps/test-configmap 200 OK in 0 milliseconds I0513 22:33:33.438649 75453 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/events?fieldSelector=involvedObject.uid%3D28bddf62-518a-4154-a847-a67552860efd%2CinvolvedObject.name%3Dtest-configmap%2CinvolvedObject.namespace%3Dtest-configmaps%2CinvolvedObject.kind%3DConfigMap&limit=500 200 OK in 1 milliseconds (Bconfigmap "test-configmap" deleted configmap "test-binary-configmap" deleted namespace "test-configmaps" deleted W0513 22:33:36.036371 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: 
failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:33:36.036403 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0513 22:33:36.895500 56663 namespace_controller.go:185] Namespace has been deleted test-secrets
+++ exit code: 0
Recording: run_client_config_tests
Running command: run_client_config_tests
+++ Running case: test-cmd.run_client_config_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_client_config_tests
+++ [0513 22:33:38] Creating namespace namespace-1652481218-29646
namespace/namespace-1652481218-29646 created
Context "test" modified.
+++ [0513 22:33:38] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "vendor/k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests
+++ Running case: test-cmd.run_service_accounts_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_service_accounts_tests
+++ [0513 22:33:39] Creating namespace namespace-1652481219-19966
namespace/namespace-1652481219-19966 created
Context "test" modified.
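
The client-config phase above never reaches the API server: each assertion checks how kubectl fails when kubeconfig inputs are bad, covering a missing file, an undefined context, cluster, and user, and a config file whose apiVersion ("v-1") is not registered. Equivalent invocations (a sketch; the names mirror the log):

    kubectl get pods --kubeconfig=missing        # error: stat missing: no such file or directory
    kubectl get pods --context=missing-context   # context was not found
    kubectl get pods --cluster=missing-cluster   # no server found for cluster
    kubectl get pods --user=missing-user         # auth info does not exist
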
+++ [0513 22:33:39] Testing service accounts core.sh:951: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: : (Bnamespace/test-service-accounts created core.sh:955: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts (Bcore.sh:959: Successful get serviceaccount --namespace=test-service-accounts {{range.items}}{{ if eq .metadata.name \"test-service-account\" }}found{{end}}{{end}}:: : (Bserviceaccount/test-service-account created (dry run) serviceaccount/test-service-account created (server dry run) core.sh:963: Successful get serviceaccount --namespace=test-service-accounts {{range.items}}{{ if eq .metadata.name \"test-service-account\" }}found{{end}}{{end}}:: : (Bserviceaccount/test-service-account created core.sh:967: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account (Bquery for serviceaccounts had limit param query for secrets had limit param query for events had limit param query for serviceaccounts had user-specified limit param Successful describe serviceaccounts verbose logs: I0513 22:33:40.187785 75901 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:33:40.193397 75901 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds I0513 22:33:40.216475 75901 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/serviceaccounts?limit=500 200 OK in 2 milliseconds I0513 22:33:40.219087 75901 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/serviceaccounts/default 200 OK in 1 milliseconds I0513 22:33:40.220625 75901 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/secrets?limit=500 200 OK in 1 milliseconds I0513 22:33:40.222344 75901 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/events?fieldSelector=involvedObject.uid%3Dc16322dc-0d78-4d0f-a945-1058ffa87556%2CinvolvedObject.name%3Ddefault%2CinvolvedObject.namespace%3Dtest-service-accounts%2CinvolvedObject.kind%3DServiceAccount&limit=500 200 OK in 1 milliseconds I0513 22:33:40.224502 75901 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/serviceaccounts/test-service-account 200 OK in 1 milliseconds I0513 22:33:40.226002 75901 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/secrets?limit=500 200 OK in 1 milliseconds I0513 22:33:40.227449 75901 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/events?fieldSelector=involvedObject.name%3Dtest-service-account%2CinvolvedObject.namespace%3Dtest-service-accounts%2CinvolvedObject.kind%3DServiceAccount%2CinvolvedObject.uid%3De82d86ab-43cd-4870-bc0e-453925d0208b&limit=500 200 OK in 1 milliseconds (Bserviceaccount "test-service-account" deleted namespace "test-service-accounts" deleted I0513 22:33:43.790043 56663 namespace_controller.go:185] Namespace has been deleted test-configmaps +++ exit code: 0 Recording: run_job_tests Running command: run_job_tests +++ Running case: test-cmd.run_job_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_job_tests +++ [0513 22:33:45] Creating namespace namespace-1652481225-14148 namespace/namespace-1652481225-14148 created Context "test" modified. 
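
As with namespaces and secrets earlier, the service-account phase above first runs both dry-run modes and asserts nothing was persisted: --dry-run=client renders the object locally, while --dry-run=server submits the request so admission and validation run server-side without storing anything, which is why the "found" check still comes up empty after both. A sketch of the progression the log shows:

    kubectl create serviceaccount test-service-account --dry-run=client -o yaml
    kubectl create serviceaccount test-service-account --dry-run=server
    kubectl create serviceaccount test-service-account   # the real create, asserted afterwards
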
+++ [0513 22:33:45] Testing job
batch.sh:30: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-jobs\" }}found{{end}}{{end}}:: :
namespace/test-jobs created
batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
batch.sh:37: Successful get cronjob --namespace=test-jobs {{range.items}}{{ if eq .metadata.name \"pi\" }}found{{end}}{{end}}:: :
cronjob.batch/pi created (dry run)
I0513 22:33:46.142928 53075 controller.go:611] quota admission added evaluator for: cronjobs.batch
cronjob.batch/pi created (server dry run)
batch.sh:41: Successful get cronjob {{range.items}}{{ if eq .metadata.name \"pi\" }}found{{end}}{{end}}:: :
I0513 22:33:46.280481 56663 event.go:294] "Event occurred" object="test-jobs/pi" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Warning" reason="InvalidSchedule" message="invalid schedule: 59 23 31 2 * : time difference between two schedules less than 1 second"
cronjob.batch/pi created
batch.sh:45: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
NAME   SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
pi     59 23 31 2 *   False     0                        0s
Name:                          pi
Namespace:                     test-jobs
Labels:
Annotations:
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:
Selector:
Parallelism:
Completions:
Pod Template:
  Labels:
  Containers:
   pi:
    Image:       k8s.gcr.io/perl
    Port:
    Host Port:
    Command:     perl -Mbignum=bpi -wle print bpi(20) -s https://127.0.0.1:6443 --insecure-skip-tls-verify --match-server-version
    Environment:
    Mounts:
  Volumes:
Last Schedule Time:
Active Jobs:
Events:
  Type     Reason           Age   From                Message
  ----     ------           ----  ----                -------
  Warning  InvalidSchedule  0s    cronjob-controller  invalid schedule: 59 23 31 2 * : time difference between two schedules less than 1 second
query for cronjobs had limit param
query for events had limit param
query for cronjobs had user-specified limit param
Successful describe cronjobs verbose logs:
I0513 22:33:46.497161 76168 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config
I0513 22:33:46.501845 76168 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0513 22:33:46.525689 76168 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/test-jobs/cronjobs?limit=500 200 OK in 1 milliseconds
I0513 22:33:46.527866 76168 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/test-jobs/cronjobs/pi 200 OK in 1 milliseconds
I0513 22:33:46.531988 76168 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-jobs/events?fieldSelector=involvedObject.kind%3DCronJob%2CinvolvedObject.uid%3D64ddc5ae-3735-420c-9506-cec88dc85c91%2CinvolvedObject.name%3Dpi%2CinvolvedObject.namespace%3Dtest-jobs&limit=500 200 OK in 1 milliseconds
W0513 22:33:46.650141 76194 helpers.go:650] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
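
The InvalidSchedule warning above is expected: the fixture schedule "59 23 31 2 *" asks for 23:59 on February 31st, a date that never occurs, so the controller can never compute a sane next run. The deprecation warning on the last line comes from the boolean --dry-run spelling used when creating a Job from the CronJob. Sketches of both commands (argument shapes inferred from the describe output above, not the verbatim batch.sh lines):

    kubectl create cronjob pi --image=k8s.gcr.io/perl \
        --schedule="59 23 31 2 *" -- perl -Mbignum=bpi -wle 'print bpi(20)'
    kubectl create job test-job --from=cronjob/pi --dry-run=true    # deprecated boolean form
    kubectl create job test-job --from=cronjob/pi --dry-run=client  # preferred spelling
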
Successful (Bmessage:job.batch/test-job has:job.batch/test-job batch.sh:56: Successful get jobs {{range.items}}{{.metadata.name}}{{end}}: (Bbatch.sh:59: Successful get job --namespace=test-jobs {{range.items}}{{ if eq .metadata.name \"test-jobs\" }}found{{end}}{{end}}:: : (Bjob.batch/test-job created (dry run) I0513 22:33:46.993257 53075 controller.go:611] quota admission added evaluator for: jobs.batch job.batch/test-job created (server dry run) batch.sh:63: Successful get job --namespace=test-jobs {{range.items}}{{ if eq .metadata.name \"test-jobs\" }}found{{end}}{{end}}:: : (Bjob.batch/test-job created I0513 22:33:47.122487 56663 job_controller.go:498] enqueueing job test-jobs/test-job I0513 22:33:47.129855 56663 event.go:294] "Event occurred" object="test-jobs/test-job" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-job-f7cm9" I0513 22:33:47.129870 56663 job_controller.go:498] enqueueing job test-jobs/test-job I0513 22:33:47.137005 56663 job_controller.go:498] enqueueing job test-jobs/test-job batch.sh:67: Successful get job/test-job --namespace=test-jobs {{.metadata.name}}: test-job (BNAME COMPLETIONS DURATION AGE test-job 0/1 0s 0s Name: test-job Namespace: test-jobs Selector: controller-uid=788ef2e1-0ec5-48d3-a291-f9ce6d957883 Labels: controller-uid=788ef2e1-0ec5-48d3-a291-f9ce6d957883 job-name=test-job Annotations: cronjob.kubernetes.io/instantiate: manual Parallelism: 1 Completions: 1 Completion Mode: NonIndexed Start Time: Fri, 13 May 2022 22:33:47 +0000 Pods Statuses: 1 Active (0 Ready) / 0 Succeeded / 0 Failed Pod Template: Labels: controller-uid=788ef2e1-0ec5-48d3-a291-f9ce6d957883 job-name=test-job Containers: pi: Image: k8s.gcr.io/perl Port: Host Port: Command: perl -Mbignum=bpi -wle print bpi(20) -s https://127.0.0.1:6443 --insecure-skip-tls-verify --match-server-version Environment: Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s job-controller Created pod: test-job-f7cm9 query for jobs had limit param query for events had limit param query for jobs had user-specified limit param Successful describe jobs verbose logs: I0513 22:33:47.354973 76321 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:33:47.360012 76321 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:33:47.383051 76321 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/test-jobs/jobs?limit=500 200 OK in 1 milliseconds I0513 22:33:47.385343 76321 round_trippers.go:553] GET https://127.0.0.1:6443/apis/batch/v1/namespaces/test-jobs/jobs/test-job 200 OK in 1 milliseconds I0513 22:33:47.388469 76321 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-jobs/events?fieldSelector=involvedObject.name%3Dtest-job%2CinvolvedObject.namespace%3Dtest-jobs%2CinvolvedObject.kind%3DJob%2CinvolvedObject.uid%3D788ef2e1-0ec5-48d3-a291-f9ce6d957883&limit=500 200 OK in 1 milliseconds (BI0513 22:33:47.520443 56663 job_controller.go:498] enqueueing job test-jobs/test-job E0513 22:33:47.520589 56663 tracking_utils.go:109] "deleting tracking annotation UID expectations" err="couldn't create key for object test-jobs/test-job: could not find key for obj \"test-jobs/test-job\"" job="test-jobs/test-job" job.batch "test-job" deleted cronjob.batch "pi" deleted namespace "test-jobs" deleted E0513 22:33:48.130174 56663 tracking_utils.go:109] "deleting tracking annotation UID expectations" 
err="couldn't create key for object test-jobs/test-job: could not find key for obj \"test-jobs/test-job\"" job="test-jobs/test-job" I0513 22:33:50.529004 56663 namespace_controller.go:185] Namespace has been deleted test-service-accounts +++ exit code: 0 Recording: run_create_job_tests Running command: run_create_job_tests +++ Running case: test-cmd.run_create_job_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_create_job_tests +++ [0513 22:33:52] Creating namespace namespace-1652481232-21406 namespace/namespace-1652481232-21406 created Context "test" modified. I0513 22:33:52.997849 56663 job_controller.go:498] enqueueing job namespace-1652481232-21406/test-job job.batch/test-job created I0513 22:33:53.003790 56663 event.go:294] "Event occurred" object="namespace-1652481232-21406/test-job" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-job-5l2km" I0513 22:33:53.003811 56663 job_controller.go:498] enqueueing job namespace-1652481232-21406/test-job I0513 22:33:53.011343 56663 job_controller.go:498] enqueueing job namespace-1652481232-21406/test-job create.sh:94: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd (Bjob.batch "test-job" deleted I0513 22:33:53.152893 56663 job_controller.go:498] enqueueing job namespace-1652481232-21406/test-job E0513 22:33:53.153103 56663 tracking_utils.go:109] "deleting tracking annotation UID expectations" err="couldn't create key for object namespace-1652481232-21406/test-job: could not find key for obj \"namespace-1652481232-21406/test-job\"" job="namespace-1652481232-21406/test-job" I0513 22:33:53.208053 56663 job_controller.go:498] enqueueing job namespace-1652481232-21406/test-job-pi job.batch/test-job-pi created I0513 22:33:53.238776 56663 job_controller.go:498] enqueueing job namespace-1652481232-21406/test-job-pi I0513 22:33:53.238797 56663 event.go:294] "Event occurred" object="namespace-1652481232-21406/test-job-pi" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-job-pi-rtldk" I0513 22:33:53.247641 56663 job_controller.go:498] enqueueing job namespace-1652481232-21406/test-job-pi create.sh:100: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl (Bjob.batch "test-job-pi" deleted I0513 22:33:53.338752 56663 job_controller.go:498] enqueueing job namespace-1652481232-21406/test-job-pi E0513 22:33:53.338883 56663 tracking_utils.go:109] "deleting tracking annotation UID expectations" err="couldn't create key for object namespace-1652481232-21406/test-job-pi: could not find key for obj \"namespace-1652481232-21406/test-job-pi\"" job="namespace-1652481232-21406/test-job-pi" cronjob.batch/test-pi created I0513 22:33:53.489013 56663 job_controller.go:498] enqueueing job namespace-1652481232-21406/my-pi job.batch/my-pi created I0513 22:33:53.498783 56663 job_controller.go:498] enqueueing job namespace-1652481232-21406/my-pi I0513 22:33:53.498835 56663 event.go:294] "Event occurred" object="namespace-1652481232-21406/my-pi" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: my-pi-w9cpc" I0513 22:33:53.506274 56663 job_controller.go:498] enqueueing job namespace-1652481232-21406/my-pi Successful (Bmessage:[perl -Mbignum=bpi -wle print bpi(10)] has:perl -Mbignum=bpi -wle print bpi(10) job.batch "my-pi" deleted I0513 22:33:53.628910 56663 
job_controller.go:498] enqueueing job namespace-1652481232-21406/my-pi E0513 22:33:53.629115 56663 tracking_utils.go:109] "deleting tracking annotation UID expectations" err="couldn't create key for object namespace-1652481232-21406/my-pi: could not find key for obj \"namespace-1652481232-21406/my-pi\"" job="namespace-1652481232-21406/my-pi" cronjob.batch "test-pi" deleted +++ exit code: 0 Recording: run_pod_templates_tests Running command: run_pod_templates_tests +++ Running case: test-cmd.run_pod_templates_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_pod_templates_tests +++ [0513 22:33:53] Creating namespace namespace-1652481233-14788 namespace/namespace-1652481233-14788 created Context "test" modified. +++ [0513 22:33:53] Testing pod templates core.sh:1598: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: (BE0513 22:33:54.004269 56663 tracking_utils.go:109] "deleting tracking annotation UID expectations" err="couldn't create key for object namespace-1652481232-21406/test-job: could not find key for obj \"namespace-1652481232-21406/test-job\"" job="namespace-1652481232-21406/test-job" I0513 22:33:54.105424 53075 controller.go:611] quota admission added evaluator for: podtemplates podtemplate/nginx created core.sh:1602: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx: (BE0513 22:33:54.238936 56663 tracking_utils.go:109] "deleting tracking annotation UID expectations" err="couldn't create key for object namespace-1652481232-21406/test-job-pi: could not find key for obj \"namespace-1652481232-21406/test-job-pi\"" job="namespace-1652481232-21406/test-job-pi" NAME CONTAINERS IMAGES POD LABELS nginx nginx nginx name=nginx core.sh:1610: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx: (BW0513 22:33:54.485624 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0513 22:33:54.485657 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0513 22:33:54.499885 56663 tracking_utils.go:109] "deleting tracking annotation UID expectations" err="couldn't create key for object namespace-1652481232-21406/my-pi: could not find key for obj \"namespace-1652481232-21406/my-pi\"" job="namespace-1652481232-21406/my-pi" query for podtemplates had limit param query for events had limit param query for podtemplates had user-specified limit param Successful describe podtemplates verbose logs: I0513 22:33:54.472887 76745 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:33:54.479350 76745 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 6 milliseconds I0513 22:33:54.503608 76745 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481233-14788/podtemplates?limit=500 200 OK in 1 milliseconds I0513 22:33:54.506091 76745 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481233-14788/podtemplates/nginx 200 OK in 1 milliseconds I0513 22:33:54.507649 76745 round_trippers.go:553] GET 
https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481233-14788/events?fieldSelector=involvedObject.name%3Dnginx%2CinvolvedObject.namespace%3Dnamespace-1652481233-14788%2CinvolvedObject.kind%3DPodTemplate%2CinvolvedObject.uid%3Dad8c109a-ebea-4691-af0f-00bacceb5e6c&limit=500 200 OK in 1 milliseconds (Bpodtemplate "nginx" deleted core.sh:1616: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: (B+++ exit code: 0 Recording: run_service_tests Running command: run_service_tests +++ Running case: test-cmd.run_service_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_service_tests Context "test" modified. +++ [0513 22:33:54] Testing kubectl(v1:services) core.sh:989: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (BI0513 22:33:55.102557 53075 alloc.go:327] "allocated clusterIPs" service="default/redis-master" clusterIPs=map[IPv4:10.0.0.172] service/redis-master created core.sh:993: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master: (Bmatched Name: matched Labels: matched Selector: matched IP: matched Port: matched Endpoints: matched Session Affinity: core.sh:995: Successful describe services redis-master: Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.172 IPs: 10.0.0.172 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None Events: (Bcore.sh:997: Successful describe Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.172 IPs: 10.0.0.172 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None Events: (B core.sh:999: Successful describe Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.172 IPs: 10.0.0.172 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None (B core.sh:1001: Successful describe Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.172 IPs: 10.0.0.172 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None Events: (B matched Name: matched Labels: matched Selector: matched IP: matched Port: matched Endpoints: matched Session Affinity: Successful describe services: Name: kubernetes Namespace: default Labels: component=apiserver provider=kubernetes Annotations: Selector: Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.1 IPs: 10.0.0.1 Port: https 443/TCP TargetPort: 6443/TCP Endpoints: 10.34.203.8:6443 Session Affinity: None Events: Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.172 IPs: 10.0.0.172 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None Events: (BSuccessful describe Name: kubernetes Namespace: default Labels: component=apiserver provider=kubernetes Annotations: Selector: Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.1 
IPs: 10.0.0.1 Port: https 443/TCP TargetPort: 6443/TCP Endpoints: 10.34.203.8:6443 Session Affinity: None Events: Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.172 IPs: 10.0.0.172 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None Events: (BSuccessful describe Name: kubernetes Namespace: default Labels: component=apiserver provider=kubernetes Annotations: Selector: Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.1 IPs: 10.0.0.1 Port: https 443/TCP TargetPort: 6443/TCP Endpoints: 10.34.203.8:6443 Session Affinity: None Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.172 IPs: 10.0.0.172 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None (BSuccessful describe Name: kubernetes Namespace: default Labels: component=apiserver provider=kubernetes Annotations: Selector: Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.1 IPs: 10.0.0.1 Port: https 443/TCP TargetPort: 6443/TCP Endpoints: 10.34.203.8:6443 Session Affinity: None Events: Name: redis-master Namespace: default Labels: app=redis role=master tier=backend Annotations: Selector: app=redis,role=master,tier=backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.172 IPs: 10.0.0.172 Port: 6379/TCP TargetPort: 6379/TCP Endpoints: Session Affinity: None Events: (Bquery for services had limit param query for events had limit param query for services had user-specified limit param Successful describe services verbose logs: I0513 22:33:55.874454 77032 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:33:55.879115 77032 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:33:55.900538 77032 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services?limit=500 200 OK in 1 milliseconds I0513 22:33:55.902743 77032 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services/kubernetes 200 OK in 1 milliseconds I0513 22:33:55.904272 77032 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/endpoints/kubernetes 200 OK in 1 milliseconds I0513 22:33:55.905738 77032 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/events?fieldSelector=involvedObject.namespace%3Ddefault%2CinvolvedObject.kind%3DService%2CinvolvedObject.uid%3D680f10de-faed-487f-95de-328c30c0d4df%2CinvolvedObject.name%3Dkubernetes&limit=500 200 OK in 1 milliseconds I0513 22:33:55.907615 77032 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/services/redis-master 200 OK in 1 milliseconds I0513 22:33:55.908835 77032 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/endpoints/redis-master 200 OK in 1 milliseconds I0513 22:33:55.910044 77032 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/events?fieldSelector=involvedObject.namespace%3Ddefault%2CinvolvedObject.kind%3DService%2CinvolvedObject.uid%3D54d0db5e-236a-420a-980c-69063306e120%2CinvolvedObject.name%3Dredis-master&limit=500 200 OK in 1 milliseconds (Bcore.sh:1015: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: 
redis:master:backend:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: redis
    role: master
    tier: backend
  name: redis-master
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    role: padawan
status:
  loadBalancer: {}
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2022-05-13T22:33:55Z"
  labels:
    app: redis
    role: master
    tier: backend
  name: redis-master
  namespace: default
  resourceVersion: "1994"
  uid: 54d0db5e-236a-420a-980c-69063306e120
spec:
  clusterIP: 10.0.0.172
  clusterIPs:
  - 10.0.0.172
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
service/redis-master selector updated
core.sh:1023: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: padawan:
service/redis-master selector updated
core.sh:1027: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2022-05-13T22:33:55Z"
  labels:
    app: redis
    role: master
    tier: backend
  name: redis-master
  namespace: default
  resourceVersion: "1999"
  uid: 54d0db5e-236a-420a-980c-69063306e120
spec:
  clusterIP: 10.0.0.172
  clusterIPs:
  - 10.0.0.172
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2022-05-13T22:33:55Z"
  labels:
    app: redis
    role: master
    tier: backend
  name: redis-master
  namespace: default
  resourceVersion: "1999"
  uid: 54d0db5e-236a-420a-980c-69063306e120
spec:
  clusterIP: 10.0.0.172
  clusterIPs:
  - 10.0.0.172
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Successful
message:kubectl-create kubectl-set
has:kubectl-set
error: you must specify resources by --filename when --local is set.
Example resource specifications include: '-f rsrc.yaml' '--filename=rsrc.json' core.sh:1034: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend: (Bservice/redis-master selector updated Successful (Bmessage:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again has:Conflict core.sh:1047: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master: (Bservice "redis-master" deleted core.sh:1054: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (Bcore.sh:1058: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (BI0513 22:33:57.647930 53075 alloc.go:327] "allocated clusterIPs" service="default/redis-master" clusterIPs=map[IPv4:10.0.0.172] service/redis-master created core.sh:1062: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master: (BI0513 22:33:57.766487 56663 namespace_controller.go:185] Namespace has been deleted test-jobs core.sh:1066: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master: (BI0513 22:33:57.956105 53075 alloc.go:327] "allocated clusterIPs" service="default/service-v1-test" clusterIPs=map[IPv4:10.0.0.125] service/service-v1-test created core.sh:1087: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test: (Bservice/service-v1-test replaced core.sh:1094: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test: (Bservice "redis-master" deleted service "service-v1-test" deleted core.sh:1102: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (Bcore.sh:1106: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (BW0513 22:33:58.681836 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0513 22:33:58.681864 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource I0513 22:33:58.757306 53075 alloc.go:327] "allocated clusterIPs" service="default/redis-master" clusterIPs=map[IPv4:10.0.0.154] service/redis-master created I0513 22:33:58.932278 53075 alloc.go:327] "allocated clusterIPs" service="default/redis-slave" clusterIPs=map[IPv4:10.0.0.113] service/redis-slave created core.sh:1111: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave: (BSuccessful (Bmessage:NAME RSRC kubernetes 192 redis-master 2020 redis-slave 2024 has:redis-master core.sh:1121: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave: (Bservice "redis-master" deleted service "redis-slave" deleted core.sh:1128: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (Bcore.sh:1132: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (Bservice/beep-boop created (dry run) service/beep-boop created (server dry run) core.sh:1136: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (Bservice/beep-boop created core.sh:1140: Successful get services 
{{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes: (Bcore.sh:1144: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes: (Bservice "beep-boop" deleted core.sh:1151: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (Bcore.sh:1155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bcore.sh:1157: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (Bservice/testmetadata created (dry run) pod/testmetadata created (dry run) service/testmetadata created (server dry run) pod/testmetadata created (server dry run) core.sh:1162: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes: (BI0513 22:34:00.404578 53075 alloc.go:327] "allocated clusterIPs" service="default/testmetadata" clusterIPs=map[IPv4:10.0.0.65] service/testmetadata created pod/testmetadata created core.sh:1166: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: testmetadata: (Bcore.sh:1167: Successful get service testmetadata {{(index .spec.ports 0).port}}: 80 (BSuccessful (Bmessage:kubectl-run has:kubectl-run I0513 22:34:00.683379 53075 alloc.go:327] "allocated clusterIPs" service="default/exposemetadata" clusterIPs=map[IPv4:10.0.0.87] service/exposemetadata exposed core.sh:1176: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work] (BSuccessful (Bmessage:kubectl-expose has:kubectl-expose service "exposemetadata" deleted service "testmetadata" deleted pod "testmetadata" deleted +++ exit code: 0 Recording: run_daemonset_tests Running command: run_daemonset_tests +++ Running case: test-cmd.run_daemonset_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_daemonset_tests +++ [0513 22:34:01] Creating namespace namespace-1652481241-859 namespace/namespace-1652481241-859 created Context "test" modified. 
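The run_service_tests case above finishes by exercising kubectl's dry-run and expose paths (the beep-boop, testmetadata, and exposemetadata fixtures). As a rough sketch of the same flow against a live cluster (illustrative only, not the literal legacy-script invocations; the image, port, and service type below are assumptions):

  # Client-side dry run: the Service is only printed, nothing reaches the apiserver store.
  kubectl create service clusterip beep-boop --tcp=8080 --dry-run=client -o yaml
  # Server-side dry run: the request is admitted and validated but not persisted.
  kubectl create service clusterip beep-boop --tcp=8080 --dry-run=server
  # Create a pod plus a matching Service in one step, as the testmetadata case does.
  kubectl run testmetadata --image=k8s.gcr.io/nginx:test-cmd --port=80 --expose
  # Expose an existing resource under a new name, as the exposemetadata case does.
  kubectl expose pod testmetadata --name=exposemetadata --port=80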
+++ [0513 22:34:01] Testing kubectl(v1:daemonsets) apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: (BI0513 22:34:01.488639 53075 controller.go:611] quota admission added evaluator for: daemonsets.apps daemonset.apps/bind created I0513 22:34:01.493134 53075 controller.go:611] quota admission added evaluator for: controllerrevisions.apps apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1 (Bdaemonset.apps/bind configured apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1 (Bdaemonset.apps/bind image updated apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2 (Bdaemonset.apps/bind env updated apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3 (Bdaemonset.apps/bind resource requirements updated apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4 (BSuccessful (Bmessage:kube-controller-manager kubectl-client-side-apply kubectl-set has:kubectl-set query for daemonsets had limit param query for pods had limit param query for events had limit param query for daemonsets had user-specified limit param Successful describe daemonsets verbose logs: I0513 22:34:02.391119 78104 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:34:02.395381 78104 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 3 milliseconds I0513 22:34:02.416100 78104 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1652481241-859/daemonsets?limit=500 200 OK in 1 milliseconds I0513 22:34:02.418939 78104 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1652481241-859/daemonsets/bind 200 OK in 1 milliseconds I0513 22:34:02.438397 78104 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481241-859/pods?labelSelector=service%3Dbind&limit=500 200 OK in 17 milliseconds I0513 22:34:02.439849 78104 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481241-859/events?fieldSelector=involvedObject.name%3Dbind%2CinvolvedObject.namespace%3Dnamespace-1652481241-859%2CinvolvedObject.kind%3DDaemonSet%2CinvolvedObject.uid%3Dbeade1a1-24e1-4de3-8df4-9cc638492239&limit=500 200 OK in 1 milliseconds (Bdaemonset.apps/bind restarted apps.sh:53: Successful get daemonsets bind {{.metadata.generation}}: 5 (Bdaemonset.apps "bind" deleted +++ exit code: 0 Recording: run_daemonset_history_tests Running command: run_daemonset_history_tests +++ Running case: test-cmd.run_daemonset_history_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_daemonset_history_tests +++ [0513 22:34:02] Creating namespace namespace-1652481242-1321 namespace/namespace-1652481242-1321 created Context "test" modified. 
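Each mutation in the run_daemonset_tests case above bumps the DaemonSet's .metadata.generation; the apps.sh assertions walk it from 1 to 5. A hedged equivalent in plain kubectl (the container name kubernetes-pause matches the fixture, but the env and resource values here are placeholders, not the script's exact arguments):

  kubectl set image daemonset/bind kubernetes-pause=k8s.gcr.io/pause:latest   # generation 2
  kubectl set env daemonset/bind DEMO=1                                       # generation 3
  kubectl set resources daemonset/bind --limits=cpu=200m,memory=512Mi         # generation 4
  kubectl rollout restart daemonset/bind                                      # generation 5
  kubectl get daemonset bind -o go-template='{{.metadata.generation}}'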
+++ [0513 22:34:02] Testing kubectl(v1:daemonsets, v1:controllerrevisions) apps.sh:71: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: (BFlag --record has been deprecated, --record will be removed in the future daemonset.apps/bind created apps.sh:75: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1652481242-1321"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}} kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true]: (Bdaemonset.apps/bind skipped rollback (current template already matches revision 1) apps.sh:78: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0: (Bapps.sh:79: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1 (BFlag --record has been deprecated, --record will be removed in the future daemonset.apps/bind configured apps.sh:82: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest: (Bapps.sh:83: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd: (Bapps.sh:84: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2 (Bapps.sh:85: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:2 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1652481242-1321"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:latest","name":"kubernetes-pause"},{"image":"k8s.gcr.io/nginx:test-cmd","name":"app"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}} kubernetes.io/change-cause:kubectl apply 
--filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true]:map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1652481242-1321"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}} kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true]: (Bdaemonset.apps/bind will roll back to Pod Template: Labels: service=bind Containers: kubernetes-pause: Image: k8s.gcr.io/pause:2.0 Port: Host Port: Environment: Mounts: Volumes: (dry run) daemonset.apps/bind rolled back (server dry run) apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest: (Bapps.sh:90: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd: (Bapps.sh:91: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2 (Bdaemonset.apps/bind rolled back apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0: (Bapps.sh:95: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1 (BSuccessful (Bmessage:error: unable to find specified revision 1000000 in history has:unable to find specified revision apps.sh:99: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0: (Bapps.sh:100: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1 (Bdaemonset.apps/bind rolled back apps.sh:103: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest: (Bapps.sh:104: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd: (Bapps.sh:105: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2 (Bdaemonset.apps "bind" deleted +++ exit code: 0 Recording: run_rc_tests Running command: run_rc_tests +++ Running case: test-cmd.run_rc_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_rc_tests +++ [0513 22:34:05] Creating namespace namespace-1652481245-15758 namespace/namespace-1652481245-15758 created Context "test" modified. 
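The rollback sequence above (a client dry-run undo, a server dry-run undo, a real undo to revision 1, an undo to the nonexistent revision 1000000, then an undo forward again) maps directly onto kubectl rollout; a sketch, with revision numbers taken from the log:

  kubectl rollout history daemonset/bind
  kubectl rollout undo daemonset/bind --to-revision=1 --dry-run=client
  kubectl rollout undo daemonset/bind --to-revision=1 --dry-run=server
  kubectl rollout undo daemonset/bind --to-revision=1
  kubectl rollout undo daemonset/bind --to-revision=1000000   # error: unable to find specified revision
  kubectl rollout undo daemonset/bind                         # roll forward to the other revision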
+++ [0513 22:34:05] Testing kubectl(v1:replicationcontrollers) core.sh:1205: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (Breplicationcontroller/frontend created I0513 22:34:05.531527 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-r74gm" I0513 22:34:05.559949 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-gbntn" I0513 22:34:05.560203 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-hzrfh" replicationcontroller "frontend" deleted W0513 22:34:05.616722 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0513 22:34:05.616765 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource core.sh:1210: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: (Bcore.sh:1214: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (Breplicationcontroller/frontend created I0513 22:34:05.935328 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-fd4wr" I0513 22:34:05.941992 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-2qmh4" I0513 22:34:05.942017 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-z64tv" core.sh:1218: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend: (Bmatched Name: matched Pod Template: matched Labels: matched Selector: matched Replicas: matched Pods Status: matched Volumes: matched GET_HOSTS_FROM: core.sh:1220: Successful describe rc frontend: Name: frontend Namespace: namespace-1652481245-15758 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replication-controller Created pod: frontend-fd4wr Normal SuccessfulCreate 1s replication-controller Created pod: frontend-2qmh4 Normal SuccessfulCreate 1s replication-controller Created pod: frontend-z64tv (Bcore.sh:1222: Successful describe Name: frontend Namespace: namespace-1652481245-15758 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: 
Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replication-controller Created pod: frontend-fd4wr Normal SuccessfulCreate 1s replication-controller Created pod: frontend-2qmh4 Normal SuccessfulCreate 1s replication-controller Created pod: frontend-z64tv (B core.sh:1224: Successful describe Name: frontend Namespace: namespace-1652481245-15758 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: (B core.sh:1226: Successful describe Name: frontend Namespace: namespace-1652481245-15758 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replication-controller Created pod: frontend-fd4wr Normal SuccessfulCreate 1s replication-controller Created pod: frontend-2qmh4 Normal SuccessfulCreate 1s replication-controller Created pod: frontend-z64tv (B matched Name: matched Name: matched Pod Template: matched Labels: matched Selector: matched Replicas: matched Pods Status: matched Volumes: matched GET_HOSTS_FROM: Successful describe rc: Name: frontend Namespace: namespace-1652481245-15758 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replication-controller Created pod: frontend-fd4wr Normal SuccessfulCreate 1s replication-controller Created pod: frontend-2qmh4 Normal SuccessfulCreate 1s replication-controller Created pod: frontend-z64tv (BSuccessful describe Name: frontend Namespace: namespace-1652481245-15758 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replication-controller Created pod: frontend-fd4wr Normal SuccessfulCreate 1s 
replication-controller Created pod: frontend-2qmh4 Normal SuccessfulCreate 1s replication-controller Created pod: frontend-z64tv (BSuccessful describe Name: frontend Namespace: namespace-1652481245-15758 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: (BSuccessful describe Name: frontend Namespace: namespace-1652481245-15758 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v4 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replication-controller Created pod: frontend-fd4wr Normal SuccessfulCreate 1s replication-controller Created pod: frontend-2qmh4 Normal SuccessfulCreate 1s replication-controller Created pod: frontend-z64tv (Bquery for replicationcontrollers had limit param query for events had limit param query for replicationcontrollers had user-specified limit param Successful describe replicationcontrollers verbose logs: I0513 22:34:06.678138 78940 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:34:06.682481 78940 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:34:06.704270 78940 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481245-15758/replicationcontrollers?limit=500 200 OK in 1 milliseconds I0513 22:34:06.706276 78940 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481245-15758/replicationcontrollers/frontend 200 OK in 1 milliseconds I0513 22:34:06.709395 78940 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481245-15758/pods?labelSelector=app%3Dguestbook%2Ctier%3Dfrontend&limit=500 200 OK in 1 milliseconds I0513 22:34:06.711543 78940 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481245-15758/events?fieldSelector=involvedObject.name%3Dfrontend%2CinvolvedObject.namespace%3Dnamespace-1652481245-15758%2CinvolvedObject.kind%3DReplicationController%2CinvolvedObject.uid%3D35674673-d50a-469a-a5a9-78fa8b1401e2&limit=500 200 OK in 1 milliseconds (Bcore.sh:1240: Successful get rc frontend {{.spec.replicas}}: 3 (Breplicationcontroller/frontend scaled E0513 22:34:06.932506 56663 replica_set.go:224] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend namespace-1652481245-15758 35674673-d50a-469a-a5a9-78fa8b1401e2 2143 2 2022-05-13 22:34:05 +0000 UTC map[app:guestbook tier:frontend] map[] [] [] [{kubectl Update v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {kube-controller-manager Update v1 2022-05-13 22:34:05 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kubectl-create Update v1 2022-05-13 22:34:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:selector":{},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[app:guestbook tier:frontend] map[] [] [] []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] [] [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{104857600 0} {} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001358f28 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} I0513 22:34:06.952767 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: frontend-fd4wr" core.sh:1244: Successful get rc frontend {{.spec.replicas}}: 2 (Bcore.sh:1248: Successful get rc frontend {{.spec.replicas}}: 2 (Berror: Expected replicas to be 3, was 2 core.sh:1252: Successful get rc frontend {{.spec.replicas}}: 2 (Bcore.sh:1256: Successful get rc frontend {{.spec.replicas}}: 2 (Breplicationcontroller/frontend scaled I0513 22:34:07.355943 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-grdwz" core.sh:1260: Successful get rc frontend {{.spec.replicas}}: 3 (Bcore.sh:1264: Successful get rc frontend {{.spec.replicas}}: 3 (Breplicationcontroller/frontend scaled E0513 22:34:07.553952 56663 replica_set.go:224] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend namespace-1652481245-15758 35674673-d50a-469a-a5a9-78fa8b1401e2 2154 4 2022-05-13 22:34:05 +0000 UTC map[app:guestbook tier:frontend] map[] [] [] [{kubectl Update v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {kube-controller-manager Update v1 2022-05-13 22:34:05 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kubectl-create Update v1 2022-05-13 22:34:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:selector":{},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[app:guestbook tier:frontend] map[] [] [] []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] [] [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{104857600 0} {} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001c26078 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} I0513 22:34:07.583933 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: frontend-grdwz" core.sh:1268: Successful get rc frontend {{.spec.replicas}}: 2 (Breplicationcontroller "frontend" deleted replicationcontroller/redis-master created I0513 22:34:07.868505 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/redis-master" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-master-ctm5w" replicationcontroller/redis-slave created I0513 22:34:08.055581 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/redis-slave" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-pchx8" I0513 22:34:08.062348 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/redis-slave" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-2n4x6" replicationcontroller/redis-master scaled replicationcontroller/redis-slave scaled I0513 22:34:08.121075 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/redis-master" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-master-txkn4" I0513 22:34:08.127929 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/redis-master" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: 
redis-master-kvbwp" I0513 22:34:08.127957 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/redis-slave" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-6gmwf" I0513 22:34:08.127971 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/redis-master" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-master-82kcb" I0513 22:34:08.134379 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/redis-slave" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-brccl" core.sh:1278: Successful get rc redis-master {{.spec.replicas}}: 4 (Bcore.sh:1279: Successful get rc redis-slave {{.spec.replicas}}: 4 (Breplicationcontroller "redis-master" deleted replicationcontroller "redis-slave" deleted deployment.apps/nginx-deployment created I0513 22:34:08.534905 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-58f46b58b6 to 3" I0513 22:34:08.563757 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-rjxfk" I0513 22:34:08.573734 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-xcw2p" I0513 22:34:08.573768 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-69cbp" W0513 22:34:08.587946 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0513 22:34:08.587978 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource deployment.apps/nginx-deployment scaled I0513 22:34:09.621243 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-58f46b58b6 to 1 from 3" I0513 22:34:09.641411 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-58f46b58b6-rjxfk" I0513 22:34:09.654729 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-58f46b58b6-xcw2p" core.sh:1288: Successful get deployment nginx-deployment {{.spec.replicas}}: 1 (Bdeployment.apps "nginx-deployment" deleted I0513 
22:34:09.847665 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481245-15758/expose-test-deployment" clusterIPs=map[IPv4:10.0.0.249] Successful (Bmessage:service/expose-test-deployment exposed has:service/expose-test-deployment exposed service "expose-test-deployment" deleted Successful (Bmessage:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed has:invalid deployment: no selectors deployment.apps/nginx-deployment created I0513 22:34:10.180499 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-58f46b58b6 to 3" I0513 22:34:10.189872 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-kcvmz" I0513 22:34:10.196232 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-lhn9r" I0513 22:34:10.196274 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-qdgxn" core.sh:1307: Successful get deployment nginx-deployment {{.spec.replicas}}: 3 (BI0513 22:34:10.314475 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481245-15758/nginx-deployment" clusterIPs=map[IPv4:10.0.0.229] service/nginx-deployment exposed core.sh:1311: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80 (Bdeployment.apps "nginx-deployment" deleted service "nginx-deployment" deleted replicationcontroller/frontend created I0513 22:34:10.691102 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-6kn9f" I0513 22:34:10.698476 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-s4ndn" I0513 22:34:10.698528 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-j7jcq" core.sh:1318: Successful get rc frontend {{.spec.replicas}}: 3 (BI0513 22:34:10.843716 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481245-15758/frontend" clusterIPs=map[IPv4:10.0.0.225] service/frontend exposed core.sh:1322: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: 80 (BI0513 22:34:10.992493 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481245-15758/frontend-2" clusterIPs=map[IPv4:10.0.0.245] service/frontend-2 exposed core.sh:1326: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: 443 (Bpod/valid-pod created I0513 22:34:11.308160 53075 alloc.go:327] "allocated clusterIPs" 
service="namespace-1652481245-15758/frontend-3" clusterIPs=map[IPv4:10.0.0.216] service/frontend-3 exposed core.sh:1331: Successful get service frontend-3 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: 444 (BI0513 22:34:11.450335 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481245-15758/frontend-4" clusterIPs=map[IPv4:10.0.0.246] service/frontend-4 exposed core.sh:1335: Successful get service frontend-4 {{(index .spec.ports 0).port}}: 80 (Bpod "valid-pod" deleted service "frontend" deleted service "frontend-2" deleted service "frontend-3" deleted service "frontend-4" deleted Successful (Bmessage:error: cannot expose a Node has:cannot expose Successful (Bmessage:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters has:metadata.name: Invalid value I0513 22:34:12.033930 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481245-15758/kubernetes-serve-hostname-testing-sixty-three-characters-in-len" clusterIPs=map[IPv4:10.0.0.126] Successful (Bmessage:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed has:kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed service "kubernetes-serve-hostname-testing-sixty-three-characters-in-len" deleted I0513 22:34:12.215412 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481245-15758/etcd-server" clusterIPs=map[IPv4:10.0.0.27] Successful (Bmessage:service/etcd-server exposed has:etcd-server exposed core.sh:1365: Successful get service etcd-server {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: port-1 2380 (Bcore.sh:1366: Successful get service etcd-server {{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}: port-2 2379 (Bservice "etcd-server" deleted core.sh:1372: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend: (Breplicationcontroller "frontend" deleted core.sh:1376: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (Bcore.sh:1380: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (Breplicationcontroller/frontend created I0513 22:34:12.938032 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-cnv7h" I0513 22:34:12.949270 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-gh5vg" I0513 22:34:12.949302 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-v2pkc" replicationcontroller/redis-slave created I0513 22:34:13.113665 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/redis-slave" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-shh4p" I0513 22:34:13.123714 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/redis-slave" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-twx7v" core.sh:1385: Successful get rc 
{{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
core.sh:1389: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
replicationcontroller "frontend" deleted
replicationcontroller "redis-slave" deleted
core.sh:1393: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
core.sh:1397: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
replicationcontroller/frontend created
I0513 22:34:13.669093 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-s5dtw"
I0513 22:34:13.676684 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-pqhd2"
I0513 22:34:13.676722 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-94b2z"
core.sh:1400: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1403: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1407: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
error: required flag(s) "max" not set
replicationcontroller "frontend" deleted
core.sh:1416: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    name: nginx-deployment-resources
  name: nginx-deployment-resources
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: nginx
    spec:
      containers:
      - image: k8s.gcr.io/nginx:test-cmd
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
      - image: k8s.gcr.io/perl
        name: perl
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0513 22:34:14.615152 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-68f45bc4ff to 3"
I0513 22:34:14.622711 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-resources-68f45bc4ff" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-68f45bc4ff-fg6r5"
I0513 22:34:14.629238 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-resources-68f45bc4ff" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-68f45bc4ff-tj74f"
I0513 22:34:14.629269 56663 event.go:294] "Event occurred"
object="namespace-1652481245-15758/nginx-deployment-resources-68f45bc4ff" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-68f45bc4ff-zh7n5" core.sh:1422: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources: (Bcore.sh:1423: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd: (Bcore.sh:1424: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl: (Bdeployment.apps/nginx-deployment-resources resource requirements updated I0513 22:34:14.935792 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-68c4d7c875 to 1" I0513 22:34:14.946385 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-resources-68c4d7c875" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-68c4d7c875-9p794" core.sh:1427: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m: (Bcore.sh:1428: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m: (Berror: unable to find container named redis deployment.apps/nginx-deployment-resources resource requirements updated I0513 22:34:15.253249 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-resources-68c4d7c875 to 0 from 1" I0513 22:34:15.272236 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-8697c45f7c to 1 from 0" I0513 22:34:15.279514 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-resources-68c4d7c875" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-resources-68c4d7c875-9p794" I0513 22:34:15.279818 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-resources-8697c45f7c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-8697c45f7c-s5shx" core.sh:1433: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m: (Bcore.sh:1434: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m: (Bdeployment.apps/nginx-deployment-resources resource requirements updated I0513 22:34:15.507453 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-resources-68f45bc4ff to 2 from 3" I0513 22:34:15.526977 56663 event.go:294] "Event 
occurred" object="namespace-1652481245-15758/nginx-deployment-resources-68f45bc4ff" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-resources-68f45bc4ff-fg6r5" I0513 22:34:15.527833 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-6ffc5f96bd to 1 from 0" I0513 22:34:15.533817 56663 event.go:294] "Event occurred" object="namespace-1652481245-15758/nginx-deployment-resources-6ffc5f96bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-6ffc5f96bd-hq7ml" core.sh:1437: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m: (Bcore.sh:1438: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m: (Bcore.sh:1439: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m: (BapiVersion: apps/v1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "4" creationTimestamp: "2022-05-13T22:34:14Z" generation: 4 labels: name: nginx-deployment-resources name: nginx-deployment-resources namespace: namespace-1652481245-15758 resourceVersion: "2469" uid: a375d79a-21da-4157-a6f3-c20f779b7d4c spec: progressDeadlineSeconds: 600 replicas: 3 revisionHistoryLimit: 10 selector: matchLabels: name: nginx strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: creationTimestamp: null labels: name: nginx spec: containers: - image: k8s.gcr.io/nginx:test-cmd imagePullPolicy: IfNotPresent name: nginx ports: - containerPort: 80 protocol: TCP resources: limits: cpu: 200m terminationMessagePath: /dev/termination-log terminationMessagePolicy: File - image: k8s.gcr.io/perl imagePullPolicy: Always name: perl resources: limits: cpu: 400m requests: cpu: 400m terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 0 status: conditions: - lastTransitionTime: "2022-05-13T22:34:14Z" lastUpdateTime: "2022-05-13T22:34:14Z" message: Deployment does not have minimum availability. reason: MinimumReplicasUnavailable status: "False" type: Available - lastTransitionTime: "2022-05-13T22:34:14Z" lastUpdateTime: "2022-05-13T22:34:15Z" message: ReplicaSet "nginx-deployment-resources-6ffc5f96bd" is progressing. 
reason: ReplicaSetUpdated status: "True" type: Progressing observedGeneration: 4 replicas: 4 unavailableReplicas: 4 updatedReplicas: 1 apiVersion: apps/v1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "4" creationTimestamp: "2022-05-13T22:34:14Z" generation: 5 labels: name: nginx-deployment-resources name: nginx-deployment-resources namespace: namespace-1652481245-15758 resourceVersion: "2469" uid: a375d79a-21da-4157-a6f3-c20f779b7d4c spec: progressDeadlineSeconds: 600 replicas: 3 revisionHistoryLimit: 10 selector: matchLabels: name: nginx strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: creationTimestamp: null labels: name: nginx spec: containers: - image: k8s.gcr.io/nginx:test-cmd imagePullPolicy: IfNotPresent name: nginx ports: - containerPort: 80 protocol: TCP resources: limits: cpu: 200m terminationMessagePath: /dev/termination-log terminationMessagePolicy: File - image: k8s.gcr.io/perl imagePullPolicy: Always name: perl resources: limits: cpu: 400m requests: cpu: 400m terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 0 status: conditions: - lastTransitionTime: "2022-05-13T22:34:14Z" lastUpdateTime: "2022-05-13T22:34:14Z" message: Deployment does not have minimum availability. reason: MinimumReplicasUnavailable status: "False" type: Available - lastTransitionTime: "2022-05-13T22:34:14Z" lastUpdateTime: "2022-05-13T22:34:15Z" message: ReplicaSet "nginx-deployment-resources-6ffc5f96bd" is progressing. reason: ReplicaSetUpdated status: "True" type: Progressing observedGeneration: 4 replicas: 4 unavailableReplicas: 4 updatedReplicas: 1 error: you must specify resources by --filename when --local is set. Example resource specifications include: '-f rsrc.yaml' '--filename=rsrc.json' core.sh:1444: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m: (Bcore.sh:1445: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m: (Bcore.sh:1446: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m: (Bdeployment.apps "nginx-deployment-resources" deleted +++ exit code: 0 Recording: run_deployment_tests Running command: run_deployment_tests +++ Running case: test-cmd.run_deployment_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_deployment_tests +++ [0513 22:34:16] Creating namespace namespace-1652481256-20403 namespace/namespace-1652481256-20403 created Context "test" modified. 
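The --local failure above is reproducible with a sketch like this (the file name is hypothetical; --local refuses to read the object from the server, so it must come from --filename):

  kubectl set resources -f nginx-deployment-resources.yaml --limits=cpu=200m --local -o yaml   # renders client-side only
  kubectl set resources deployment nginx-deployment-resources --limits=cpu=200m --local 2>&1 || true   # the error quoted above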
+++ [0513 22:34:16] Testing deployments
deployment.apps/test-nginx-extensions created
I0513 22:34:16.551774 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/test-nginx-extensions" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-nginx-extensions-7b6f7dfdc5 to 1"
I0513 22:34:16.565148 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/test-nginx-extensions-7b6f7dfdc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-nginx-extensions-7b6f7dfdc5-bhglm"
apps.sh:191: Successful get deploy test-nginx-extensions {{(index .spec.template.spec.containers 0).name}}: nginx
Successful
message:10
has not:2
Successful
message:apps/v1
has:apps/v1
deployment.apps "test-nginx-extensions" deleted
deployment.apps/test-nginx-apps created
I0513 22:34:16.882104 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/test-nginx-apps" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-nginx-apps-99d6c65df to 1"
I0513 22:34:16.884695 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/test-nginx-apps-99d6c65df" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-nginx-apps-99d6c65df-sz49c"
apps.sh:204: Successful get deploy test-nginx-apps {{(index .spec.template.spec.containers 0).name}}: nginx
Successful
message:10
has:10
Successful
message:apps/v1
has:apps/v1
matched Name:
matched Pod Template:
matched Labels:
matched Selector:
matched Controlled By
matched Replicas:
matched Pods Status:
matched Volumes:
Successful describe rs:
Name: test-nginx-apps-99d6c65df
Namespace: namespace-1652481256-20403
Selector: app=test-nginx-apps,pod-template-hash=99d6c65df
Labels: app=test-nginx-apps
        pod-template-hash=99d6c65df
Annotations: deployment.kubernetes.io/desired-replicas: 1
             deployment.kubernetes.io/max-replicas: 2
             deployment.kubernetes.io/revision: 1
Controlled By: Deployment/test-nginx-apps
Replicas: 1 current / 1 desired
Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels: app=test-nginx-apps
          pod-template-hash=99d6c65df
  Containers:
   nginx:
    Image: k8s.gcr.io/nginx:test-cmd
    Port:
    Host Port:
    Environment:
    Mounts:
  Volumes:
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: test-nginx-apps-99d6c65df-sz49c
matched Name:
matched Image:
matched Node:
matched Labels:
matched Status:
matched Controlled By
Successful describe pods:
Name: test-nginx-apps-99d6c65df-sz49c
Namespace: namespace-1652481256-20403
Priority: 0
Node:
Labels: app=test-nginx-apps
        pod-template-hash=99d6c65df
Annotations:
Status: Pending
IP:
IPs:
Controlled By: ReplicaSet/test-nginx-apps-99d6c65df
Containers:
  nginx:
    Image: k8s.gcr.io/nginx:test-cmd
    Port:
    Host Port:
    Environment:
    Mounts:
Volumes:
QoS Class: BestEffort
Node-Selectors:
Tolerations:
Events:
query for deployments had limit param
query for replicasets had limit param
query for events had limit param
query for deployments had user-specified limit param
Successful describe deployments verbose logs:
I0513 22:34:17.259534 80466 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config
I0513 22:34:17.265904 80466 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 6 milliseconds
I0513 22:34:17.287017 80466 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1652481256-20403/deployments?limit=500 200 OK in 1 milliseconds
I0513 22:34:17.289337 80466 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1652481256-20403/deployments/test-nginx-apps 200 OK in 1 milliseconds
I0513 22:34:17.292246 80466 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481256-20403/events?fieldSelector=involvedObject.name%3Dtest-nginx-apps%2CinvolvedObject.namespace%3Dnamespace-1652481256-20403%2CinvolvedObject.kind%3DDeployment%2CinvolvedObject.uid%3Dbe6317b5-1986-4192-8b96-f4a606f3a030&limit=500 200 OK in 1 milliseconds
I0513 22:34:17.293843 80466 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1652481256-20403/replicasets?labelSelector=app%3Dtest-nginx-apps&limit=500 200 OK in 1 milliseconds
deployment.apps "test-nginx-apps" deleted
apps.sh:222: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}:
deployment.apps/nginx-with-command created (dry run)
deployment.apps/nginx-with-command created (server dry run)
apps.sh:226: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}:
deployment.apps/nginx-with-command created
I0513 22:34:17.809464 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-with-command" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-with-command-646b88b5b5 to 1"
I0513 22:34:17.816155 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-with-command-646b88b5b5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-with-command-646b88b5b5-676ft"
apps.sh:230: Successful get deploy nginx-with-command {{(index .spec.template.spec.containers 0).name}}: nginx
deployment.apps "nginx-with-command" deleted
apps.sh:236: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}:
deployment.apps/deployment-with-unixuserid created
I0513 22:34:18.195193 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/deployment-with-unixuserid" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set deployment-with-unixuserid-95945856c to 1"
I0513 22:34:18.202071 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/deployment-with-unixuserid-95945856c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deployment-with-unixuserid-95945856c-q5wgw"
apps.sh:240: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: deployment-with-unixuserid:
deployment.apps "deployment-with-unixuserid" deleted
apps.sh:247: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}:
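The nginx-with-command sequence above shows the standard dry-run ladder; a sketch (the image and command mirror the log, though the exact apps.sh invocation may differ):

  kubectl create deployment nginx-with-command --image=k8s.gcr.io/nginx:test-cmd --dry-run=client -- /bin/sh -c 'sleep 3600'
  kubectl create deployment nginx-with-command --image=k8s.gcr.io/nginx:test-cmd --dry-run=server -- /bin/sh -c 'sleep 3600'
  kubectl create deployment nginx-with-command --image=k8s.gcr.io/nginx:test-cmd -- /bin/sh -c 'sleep 3600'   # only this one persists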
nginx-deployment-58f46b58b6-f5qp5" I0513 22:34:18.584273 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-hhfg8" I0513 22:34:18.584862 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-vsg7z" apps.sh:251: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 3 (Bdeployment.apps "nginx-deployment" deleted apps.sh:255: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:259: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:260: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Bdeployment.apps/nginx-deployment created I0513 22:34:19.017978 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-f86677b89 to 1" I0513 22:34:19.025080 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-f86677b89" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-f86677b89-sdb5m" apps.sh:264: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1 (Bdeployment.apps "nginx-deployment" deleted apps.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:270: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1 (Breplicaset.apps "nginx-deployment-f86677b89" deleted apps.sh:278: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:280: Successful get hpa {{range.items}}{{ if eq .metadata.name \"nginx-deployment\" }}found{{end}}{{end}}:: : (Bdeployment.apps/nginx-deployment created I0513 22:34:19.793527 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-58f46b58b6 to 3" I0513 22:34:19.805531 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-25rg8" I0513 22:34:19.818157 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-96v52" I0513 22:34:19.818187 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-58f46b58b6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-58f46b58b6-9947d" apps.sh:283: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment: (Bhorizontalpodautoscaler.autoscaling/nginx-deployment created (dry run) apps.sh:287: Successful get hpa {{range.items}}{{ if eq .metadata.name \"nginx-deployment\" }}found{{end}}{{end}}:: : (Bhorizontalpodautoscaler.autoscaling/nginx-deployment autoscaled apps.sh:290: 
query for horizontalpodautoscalers had limit param
query for events had limit param
query for horizontalpodautoscalers had user-specified limit param
Successful describe horizontalpodautoscalers verbose logs:
I0513 22:34:20.380992 80969 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config
I0513 22:34:20.385529 80969 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0513 22:34:20.405788 80969 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2/namespaces/namespace-1652481256-20403/horizontalpodautoscalers?limit=500 200 OK in 1 milliseconds
I0513 22:34:20.408026 80969 round_trippers.go:553] GET https://127.0.0.1:6443/apis/autoscaling/v2beta2/namespaces/namespace-1652481256-20403/horizontalpodautoscalers/nginx-deployment 200 OK in 1 milliseconds
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
I0513 22:34:20.410645 80969 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481256-20403/events?fieldSelector=involvedObject.namespace%3Dnamespace-1652481256-20403%2CinvolvedObject.kind%3DHorizontalPodAutoscaler%2CinvolvedObject.uid%3D8352b1ac-e410-4986-8f7a-1e5f3a2919f7%2CinvolvedObject.name%3Dnginx-deployment&limit=500 200 OK in 2 milliseconds
horizontalpodautoscaler.autoscaling "nginx-deployment" deleted
deployment.apps "nginx-deployment" deleted
apps.sh:300: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}:
deployment.apps/nginx created
I0513 22:34:20.868046 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-585d4bd5c9 to 3"
I0513 22:34:20.875604 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-585d4bd5c9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-585d4bd5c9-wlg25"
I0513 22:34:20.881863 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-585d4bd5c9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-585d4bd5c9-6dfnm"
I0513 22:34:20.881887 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-585d4bd5c9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-585d4bd5c9-8jf6r"
apps.sh:304: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
apps.sh:305: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx skipped rollback (current template already matches revision 1)
apps.sh:308: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
Warning: resource deployments/nginx is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
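The annotation warning above is what kubectl apply prints for objects created imperatively; a sketch of how the annotation is normally seeded (the manifest file name is hypothetical):

  kubectl create deployment nginx --image=k8s.gcr.io/nginx:test-cmd --save-config   # records last-applied-configuration up front
  kubectl apply -f nginx.yaml                                                        # later applies then diff cleanly, no warning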
deployment.apps/nginx configured
I0513 22:34:21.345122 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-56b869c64c to 1"
I0513 22:34:21.352416 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-56b869c64c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-56b869c64c-sj77c"
apps.sh:311: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
    Image: k8s.gcr.io/nginx:test-cmd
deployment.apps/nginx rolled back (server dry run)
apps.sh:315: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx rolled back
apps.sh:319: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
apps.sh:322: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx rolled back
apps.sh:326: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
deployment.apps/nginx restarted
I0513 22:34:24.875507 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-585d4bd5c9 to 2 from 3"
I0513 22:34:24.894976 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-5676fc46d4 to 1 from 0"
I0513 22:34:24.901900 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-585d4bd5c9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-585d4bd5c9-6dfnm"
I0513 22:34:24.902496 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-5676fc46d4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5676fc46d4-j4lt2"
Successful
message:apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: "3"
    deployment.kubernetes.io/max-replicas: "4"
    deployment.kubernetes.io/revision: "6"
  creationTimestamp: "2022-05-13T22:34:24Z"
  generation: 2
  labels:
    name: nginx-undo
    pod-template-hash: 5676fc46d4
  name: nginx-5676fc46d4
  namespace: namespace-1652481256-20403
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: nginx
    uid: 6e1c0092-8488-472b-9f76-a601b74d2f87
  resourceVersion: "2671"
  uid: 3e4e4ab2-42cd-4789-b4f8-2096c63e9210
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-undo
      pod-template-hash: 5676fc46d4
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2022-05-13T22:34:24Z"
      creationTimestamp: null
      labels:
        name: nginx-undo
        pod-template-hash: 5676fc46d4
    spec:
      containers:
      - image: k8s.gcr.io/nginx:test-cmd
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  fullyLabeledReplicas: 1
  observedGeneration: 2
  replicas: 1
has:deployment.kubernetes.io/revision: "6"
Successful
message:kubectl-create kubectl kubectl-client-side-apply kube-controller-manager kubectl-rollout
has:kubectl-rollout
deployment.apps/nginx2 created
I0513 22:34:26.188811 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx2-846b74f569 to 3"
I0513 22:34:26.197129 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx2-846b74f569" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx2-846b74f569-85rns"
I0513 22:34:26.204248 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx2-846b74f569" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx2-846b74f569-spjcq"
I0513 22:34:26.204299 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx2-846b74f569" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx2-846b74f569-chqdk"
deployment.apps "nginx2" deleted
deployment.apps "nginx" deleted
apps.sh:360: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}:
deployment.apps/nginx-deployment created
I0513 22:34:26.606081 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-5bd846d78 to 3"
I0513 22:34:26.614399 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-5bd846d78" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-5bd846d78-g557b"
I0513 22:34:26.646636 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-5bd846d78" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-5bd846d78-wqhpg"
I0513 22:34:26.646679 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-5bd846d78" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-5bd846d78-cjw99"
apps.sh:363: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
apps.sh:364: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
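The rollback/restart sequence earlier in this block maps onto the rollout subcommands; a sketch (revision numbers mirror the errors in the log):

  kubectl rollout undo deployment/nginx                                       # back to the previous revision
  kubectl rollout undo deployment/nginx --to-revision=1000000 2>&1 || true    # unknown revision error
  kubectl rollout pause deployment/nginx
  kubectl rollout undo deployment/nginx 2>&1 || true                          # paused: cannot rollback
  kubectl rollout restart deployment/nginx 2>&1 || true                       # paused: cannot restart
  kubectl rollout resume deployment/nginx
  kubectl rollout restart deployment/nginx                                    # stamps kubectl.kubernetes.io/restartedAt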
deployment.apps/nginx-deployment image updated (dry run)
deployment.apps/nginx-deployment image updated (server dry run)
apps.sh:369: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
I0513 22:34:27.305258 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-68945b8988 to 1"
I0513 22:34:27.316569 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-68945b8988" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-68945b8988-fxnvb"
apps.sh:373: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:374: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
error: unable to find container named "redis"
deployment.apps/nginx-deployment image updated
apps.sh:379: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:380: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
apps.sh:383: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:384: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
apps.sh:387: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:388: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
I0513 22:34:28.309853 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-5bd846d78 to 2 from 3"
I0513 22:34:28.329637 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-c49ddbbf6 to 1 from 0"
I0513 22:34:28.336309 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-5bd846d78" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-5bd846d78-g557b"
I0513 22:34:28.338025 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-c49ddbbf6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-c49ddbbf6-bdfhq"
apps.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
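The image updates above are kubectl set image calls; a sketch (container=image pairs mirror the log; the wildcard form updates every container at once, which is why apps.sh:391/392 see test-cmd in both containers):

  kubectl set image deployment nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9 --dry-run=client
  kubectl set image deployment nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9
  kubectl set image deployment nginx-deployment redis=redis 2>&1 || true   # unable to find container named "redis"
  kubectl set image deployment nginx-deployment '*=k8s.gcr.io/nginx:test-cmd'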
apps.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps "nginx-deployment" deleted
I0513 22:34:28.807842 56663 horizontal.go:360] Horizontal Pod Autoscaler frontend has been deleted in namespace-1652481245-15758
apps.sh:402: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}:
deployment.apps/nginx-deployment created
I0513 22:34:29.011701 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-5bd846d78 to 3"
I0513 22:34:29.019624 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-5bd846d78" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-5bd846d78-trnr5"
I0513 22:34:29.025116 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-5bd846d78" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-5bd846d78-5zklf"
I0513 22:34:29.025723 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-5bd846d78" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-5bd846d78-khvp5"
configmap/test-set-env-config created
secret/test-set-env-secret created
apps.sh:407: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
apps.sh:409: Successful get configmaps/test-set-env-config {{.metadata.name}}: test-set-env-config
apps.sh:410: Successful get secret {{range.items}}{{.metadata.name}}:{{end}}: test-set-env-secret:
warning: key key-2 transferred to KEY_2
deployment.apps/nginx-deployment env updated
I0513 22:34:29.647435 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-6bd96bcbb to 1"
I0513 22:34:29.655683 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-6bd96bcbb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-6bd96bcbb-4x8sn"
apps.sh:414: Successful get deploy nginx-deployment {{ (index (index .spec.template.spec.containers 0).env 0).name}}: KEY_2
apps.sh:416: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1
warning: key key-1 transferred to KEY_1
warning: key key-2 transferred to KEY_2
deployment.apps/nginx-deployment env updated (dry run)
warning: key key-2 transferred to KEY_2
warning: key key-1 transferred to KEY_1
deployment.apps/nginx-deployment env updated (server dry run)
apps.sh:420: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1
warning: key key-1 transferred to KEY_1
warning: key key-2 transferred to KEY_2
deployment.apps/nginx-deployment env updated
object="namespace-1652481256-20403/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-5bd846d78 to 2 from 3" I0513 22:34:30.220712 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-66b4bfccb to 1 from 0" I0513 22:34:30.226563 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-5bd846d78" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-5bd846d78-trnr5" I0513 22:34:30.226594 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-66b4bfccb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-66b4bfccb-lh5q5" apps.sh:424: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 2 (Bdeployment.apps/nginx-deployment env updated I0513 22:34:30.386401 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-5bd846d78 to 1 from 2" I0513 22:34:30.408024 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-c64486fc8 to 1 from 0" I0513 22:34:30.420929 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-5bd846d78" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-5bd846d78-khvp5" I0513 22:34:30.423201 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment-c64486fc8" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-c64486fc8-cxt2p" deployment.apps/nginx-deployment env updated I0513 22:34:30.474385 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-5bd846d78 to 0 from 1" E0513 22:34:30.507102 56663 replica_set.go:550] sync "namespace-1652481256-20403/nginx-deployment-5bd846d78" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5bd846d78": the object has been modified; please apply your changes to the latest version and try again I0513 22:34:30.507404 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-758c8f797 to 1 from 0" warning: key username transferred to USERNAME deployment.apps/nginx-deployment env updated I0513 22:34:30.555969 56663 event.go:294] "Event occurred" object="namespace-1652481256-20403/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-6bd96bcbb to 0 from 1" I0513 
Successful
message:error: standard input cannot be used for multiple arguments
has:standard input cannot be used for multiple arguments
deployment.apps "nginx-deployment" deleted
E0513 22:34:30.905459 56663 replica_set.go:550] sync "namespace-1652481256-20403/nginx-deployment-6bd6f7b849" failed with replicasets.apps "nginx-deployment-6bd6f7b849" not found
configmap "test-set-env-config" deleted
secret "test-set-env-secret" deleted
+++ exit code: 0
Recording: run_rs_tests
Running command: run_rs_tests

+++ Running case: test-cmd.run_rs_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
+++ [0513 22:34:31] Creating namespace namespace-1652481271-16016
namespace/namespace-1652481271-16016 created
Context "test" modified.
E0513 22:34:31.188585 56663 replica_set.go:550] sync "namespace-1652481256-20403/nginx-deployment-6bd96bcbb" failed with replicasets.apps "nginx-deployment-6bd96bcbb" not found
+++ [0513 22:34:31] Testing kubectl(v1:replicasets)
E0513 22:34:31.238614 56663 replica_set.go:550] sync "namespace-1652481256-20403/nginx-deployment-5bd846d78" failed with replicasets.apps "nginx-deployment-5bd846d78" not found
apps.sh:553: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}:
E0513 22:34:31.288885 56663 replica_set.go:550] sync "namespace-1652481256-20403/nginx-deployment-6c546bbbdc" failed with replicasets.apps "nginx-deployment-6c546bbbdc" not found
E0513 22:34:31.389529 56663 replica_set.go:550] sync "namespace-1652481256-20403/nginx-deployment-758c8f797" failed with replicasets.apps "nginx-deployment-758c8f797" not found
replicaset.apps/frontend created
+++ [0513 22:34:31] Deleting rs
I0513 22:34:31.442035 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-c6hsm"
replicaset.apps "frontend" deleted
I0513 22:34:31.547128 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-rtr5k"
I0513 22:34:31.591330 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-r9xf4"
Waiting for Get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}} : expected: , got: frontend-r9xf4:
apps.sh:559: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}:
apps.sh:563: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}:
E0513 22:34:31.789406 56663 replica_set.go:550] sync "namespace-1652481271-16016/frontend" failed with replicasets.apps "frontend" not found
replicaset.apps/frontend created
I0513 22:34:31.906033 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-9qbgs"
I0513 22:34:31.940260 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-g8cjl"
Waiting for Get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}} : expected: php-redis:php-redis:php-redis:, got: php-redis:php-redis:
I0513 22:34:32.000721 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-22kww"
apps.sh:567: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
+++ [0513 22:34:32] Deleting rs
replicaset.apps "frontend" deleted
apps.sh:571: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}:
E0513 22:34:32.240831 56663 replica_set.go:550] sync "namespace-1652481271-16016/frontend" failed with replicasets.apps "frontend" not found
apps.sh:573: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
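The create/delete churn above exercises non-cascading ReplicaSet deletion; a sketch (manifest path hypothetical; the orphaned pods survive the rs, which is why apps.sh:573 still sees three php-redis containers):

  kubectl create -f frontend-replicaset.yaml
  kubectl delete rs frontend --cascade=orphan   # rs goes away, its pods stay behind
  kubectl get pods -l tier=frontend             # still lists the orphaned pods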
"frontend-22kww" deleted pod "frontend-9qbgs" deleted pod "frontend-g8cjl" deleted apps.sh:576: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:580: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Breplicaset.apps/frontend created I0513 22:34:32.712860 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-m25wx" I0513 22:34:32.720042 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-tjvvk" I0513 22:34:32.720083 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-55sjm" apps.sh:584: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend: (Bmatched Name: matched Pod Template: matched Labels: matched Selector: matched Replicas: matched Pods Status: matched Volumes: apps.sh:586: Successful describe rs frontend: Name: frontend Namespace: namespace-1652481271-16016 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s replicaset-controller Created pod: frontend-m25wx Normal SuccessfulCreate 0s replicaset-controller Created pod: frontend-tjvvk Normal SuccessfulCreate 0s replicaset-controller Created pod: frontend-55sjm (Bapps.sh:588: Successful describe Name: frontend Namespace: namespace-1652481271-16016 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s replicaset-controller Created pod: frontend-m25wx Normal SuccessfulCreate 0s replicaset-controller Created pod: frontend-tjvvk Normal SuccessfulCreate 0s replicaset-controller Created pod: frontend-55sjm (B apps.sh:590: Successful describe Name: frontend Namespace: namespace-1652481271-16016 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: (B apps.sh:592: Successful describe Name: frontend Namespace: namespace-1652481271-16016 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed 
Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-m25wx Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-tjvvk Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-55sjm (B matched Name: matched Pod Template: matched Labels: matched Selector: matched Replicas: matched Pods Status: matched Volumes: Successful describe rs: Name: frontend Namespace: namespace-1652481271-16016 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-m25wx Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-tjvvk Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-55sjm (BSuccessful describe Name: frontend Namespace: namespace-1652481271-16016 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-m25wx Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-tjvvk Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-55sjm (BSuccessful describe Name: frontend Namespace: namespace-1652481271-16016 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: (BSuccessful describe Name: frontend Namespace: namespace-1652481271-16016 Selector: app=guestbook,tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=guestbook tier=frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-m25wx Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-tjvvk Normal SuccessfulCreate 1s replicaset-controller Created pod: frontend-55sjm (Bmatched Name: matched Image: matched Node: matched Labels: 
matched Status: matched Controlled By Successful describe pods: Name: frontend-55sjm Namespace: namespace-1652481271-16016 Priority: 0 Node: Labels: app=guestbook tier=frontend Annotations: Status: Pending IP: IPs: Controlled By: ReplicaSet/frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: QoS Class: Burstable Node-Selectors: Tolerations: Events: Name: frontend-m25wx Namespace: namespace-1652481271-16016 Priority: 0 Node: Labels: app=guestbook tier=frontend Annotations: Status: Pending IP: IPs: Controlled By: ReplicaSet/frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: QoS Class: Burstable Node-Selectors: Tolerations: Events: Name: frontend-tjvvk Namespace: namespace-1652481271-16016 Priority: 0 Node: Labels: app=guestbook tier=frontend Annotations: Status: Pending IP: IPs: Controlled By: ReplicaSet/frontend Containers: php-redis: Image: gcr.io/google_samples/gb-frontend:v3 Port: 80/TCP Host Port: 0/TCP Requests: cpu: 100m memory: 100Mi Environment: GET_HOSTS_FROM: dns Mounts: Volumes: QoS Class: Burstable Node-Selectors: Tolerations: Events: (Bquery for replicasets had limit param query for pods had limit param query for events had limit param query for replicasets had user-specified limit param Successful describe replicasets verbose logs: I0513 22:34:33.546007 82585 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:34:33.550675 82585 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:34:33.572705 82585 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1652481271-16016/replicasets?limit=500 200 OK in 1 milliseconds I0513 22:34:33.574940 82585 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1652481271-16016/replicasets/frontend 200 OK in 1 milliseconds I0513 22:34:33.578143 82585 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481271-16016/pods?labelSelector=app%3Dguestbook%2Ctier%3Dfrontend&limit=500 200 OK in 1 milliseconds I0513 22:34:33.580673 82585 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481271-16016/events?fieldSelector=involvedObject.name%3Dfrontend%2CinvolvedObject.namespace%3Dnamespace-1652481271-16016%2CinvolvedObject.kind%3DReplicaSet%2CinvolvedObject.uid%3Dc75e36ee-b01f-4a73-ad53-eb83ab3e5467&limit=500 200 OK in 1 milliseconds (Bapps.sh:608: Successful get rs frontend {{.spec.replicas}}: 3 (Breplicaset.apps/frontend scaled replicaset.apps/frontend scaled apps.sh:612: Successful get rs frontend {{.spec.replicas}}: 3 (Breplicaset.apps/frontend scaled E0513 22:34:34.013077 56663 replica_set.go:224] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend namespace-1652481271-16016 c75e36ee-b01f-4a73-ad53-eb83ab3e5467 2936 2 2022-05-13 22:34:32 +0000 UTC map[app:guestbook tier:frontend] map[] [] [] [{kubectl Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {kube-controller-manager Update apps/v1 2022-05-13 22:34:32 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kubectl-create Update apps/v1 2022-05-13 22:34:32 +0000 UTC FieldsV1 
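The scaled lines above, and the scale-1/scale-2/scale-3 runs that follow, are kubectl scale invocations; a sketch (replica counts mirror the assertions; --current-replicas makes the scale conditional on the current count):

  kubectl scale rs frontend --replicas=2
  kubectl scale deployment scale-1 scale-2 scale-3 --replicas=3          # several resources in one call
  kubectl scale deployment scale-1 --current-replicas=1 --replicas=2     # only applies if the precondition holds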
{"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[app:guestbook tier:frontend] map[] [] [] []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v3 [] [] [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{104857600 0} {} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000719a68 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} I0513 22:34:34.026145 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: frontend-m25wx" apps.sh:616: Successful get rs frontend {{.spec.replicas}}: 2 (Bdeployment.apps/scale-1 created I0513 22:34:34.248798 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-1" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set scale-1-7cffd6bf6c to 1" I0513 22:34:34.256425 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-1-7cffd6bf6c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-1-7cffd6bf6c-mb95x" deployment.apps/scale-2 created I0513 22:34:34.407404 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set scale-2-7cffd6bf6c to 1" I0513 22:34:34.414149 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-2-7cffd6bf6c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-2-7cffd6bf6c-f89wh" deployment.apps/scale-3 created I0513 22:34:34.570697 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-3" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set scale-3-7cffd6bf6c to 1" I0513 22:34:34.578075 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-3-7cffd6bf6c" fieldPath="" 
kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-3-7cffd6bf6c-9rzrv" apps.sh:622: Successful get deploy scale-1 {{.spec.replicas}}: 1 (Bapps.sh:623: Successful get deploy scale-2 {{.spec.replicas}}: 1 (Bapps.sh:624: Successful get deploy scale-3 {{.spec.replicas}}: 1 (Bdeployment.apps/scale-1 scaled deployment.apps/scale-2 scaled deployment.apps/scale-3 scaled deployment.apps/scale-1 scaled deployment.apps/scale-2 scaled deployment.apps/scale-3 scaled apps.sh:628: Successful get deploy scale-1 {{.spec.replicas}}: 1 (Bapps.sh:629: Successful get deploy scale-2 {{.spec.replicas}}: 1 (Bapps.sh:630: Successful get deploy scale-3 {{.spec.replicas}}: 1 (Bdeployment.apps/scale-1 scaled deployment.apps/scale-2 scaled I0513 22:34:35.191820 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-1" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set scale-1-7cffd6bf6c to 2 from 1" I0513 22:34:35.198849 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set scale-2-7cffd6bf6c to 2 from 1" I0513 22:34:35.199220 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-1-7cffd6bf6c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-1-7cffd6bf6c-xxkpw" I0513 22:34:35.205293 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-2-7cffd6bf6c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-2-7cffd6bf6c-59rp5" I0513 22:34:35.261262 56663 horizontal.go:360] Horizontal Pod Autoscaler nginx-deployment has been deleted in namespace-1652481256-20403 apps.sh:633: Successful get deploy scale-1 {{.spec.replicas}}: 2 (Bapps.sh:634: Successful get deploy scale-2 {{.spec.replicas}}: 2 (Bapps.sh:635: Successful get deploy scale-3 {{.spec.replicas}}: 1 (Bdeployment.apps/scale-1 scaled deployment.apps/scale-2 scaled I0513 22:34:35.499050 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-1" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set scale-1-7cffd6bf6c to 3 from 2" deployment.apps/scale-3 scaled I0513 22:34:35.507593 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-1-7cffd6bf6c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-1-7cffd6bf6c-f2792" I0513 22:34:35.507635 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set scale-2-7cffd6bf6c to 3 from 2" I0513 22:34:35.515665 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-2-7cffd6bf6c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-2-7cffd6bf6c-ksq9g" I0513 22:34:35.515698 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-3" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set scale-3-7cffd6bf6c to 3 from 1" I0513 22:34:35.521309 56663 
event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-3-7cffd6bf6c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-3-7cffd6bf6c-rwmnf" I0513 22:34:35.527898 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/scale-3-7cffd6bf6c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: scale-3-7cffd6bf6c-2t84x" apps.sh:638: Successful get deploy scale-1 {{.spec.replicas}}: 3 (Bapps.sh:639: Successful get deploy scale-2 {{.spec.replicas}}: 3 (Bapps.sh:640: Successful get deploy scale-3 {{.spec.replicas}}: 3 (Breplicaset.apps "frontend" deleted deployment.apps "scale-1" deleted deployment.apps "scale-2" deleted deployment.apps "scale-3" deleted replicaset.apps/frontend created I0513 22:34:36.112827 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-sxdr5" I0513 22:34:36.119449 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-qfj2t" I0513 22:34:36.119645 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-6pp59" apps.sh:648: Successful get rs frontend {{.spec.replicas}}: 3 (BI0513 22:34:36.263843 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481271-16016/frontend" clusterIPs=map[IPv4:10.0.0.39] service/frontend exposed apps.sh:652: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: 80 (Bservice "frontend" deleted apps.sh:658: Successful get rs frontend {{.metadata.generation}}: 1 (Breplicaset.apps/frontend image updated apps.sh:660: Successful get rs frontend {{.metadata.generation}}: 2 (Breplicaset.apps/frontend env updated apps.sh:662: Successful get rs frontend {{.metadata.generation}}: 3 (Breplicaset.apps/frontend resource requirements updated (dry run) replicaset.apps/frontend resource requirements updated (server dry run) apps.sh:665: Successful get rs frontend {{.metadata.generation}}: 3 (Breplicaset.apps/frontend resource requirements updated apps.sh:667: Successful get rs frontend {{.metadata.generation}}: 4 (Breplicaset.apps/frontend serviceaccount updated (dry run) replicaset.apps/frontend serviceaccount updated (server dry run) apps.sh:670: Successful get rs frontend {{.metadata.generation}}: 4 (Breplicaset.apps/frontend serviceaccount updated apps.sh:672: Successful get rs frontend {{.metadata.generation}}: 5 (BSuccessful (Bmessage:kube-controller-manager kubectl-create kubectl-set has:kubectl-set apps.sh:680: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend: (Breplicaset.apps "frontend" deleted apps.sh:684: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:688: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Breplicaset.apps/frontend created I0513 22:34:38.224058 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-gw7vh" I0513 22:34:38.232207 56663 event.go:294] "Event occurred" 
object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-b9qmj" I0513 22:34:38.232841 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-hnjzm" replicaset.apps/redis-slave created I0513 22:34:38.400515 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/redis-slave" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-6fk4h" I0513 22:34:38.406334 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/redis-slave" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-4s94k" apps.sh:693: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave: (Bapps.sh:697: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave: (Breplicaset.apps "frontend" deleted replicaset.apps "redis-slave" deleted apps.sh:701: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Bapps.sh:706: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: (Breplicaset.apps/frontend created I0513 22:34:38.972317 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-p2qjc" I0513 22:34:38.979251 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-qn5lr" I0513 22:34:38.979312 56663 event.go:294] "Event occurred" object="namespace-1652481271-16016/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-v6cfw" apps.sh:709: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend: (Bhorizontalpodautoscaler.autoscaling/frontend autoscaled apps.sh:712: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 70 (Bhorizontalpodautoscaler.autoscaling "frontend" deleted horizontalpodautoscaler.autoscaling/frontend autoscaled apps.sh:716: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80 (BSuccessful (Bmessage:kubectl-autoscale has:kubectl-autoscale horizontalpodautoscaler.autoscaling "frontend" deleted error: required flag(s) "max" not set replicaset.apps "frontend" deleted +++ exit code: 0 Recording: run_stateful_set_tests Running command: run_stateful_set_tests +++ Running case: test-cmd.run_stateful_set_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_stateful_set_tests +++ [0513 22:34:39] Creating namespace namespace-1652481279-1161 namespace/namespace-1652481279-1161 created Context "test" modified. 
+++ [0513 22:34:39] Testing kubectl(v1:statefulsets)
apps.sh:509: Successful get statefulset {{range.items}}{{.metadata.name}}:{{end}}:
I0513 22:34:39.988613 53075 controller.go:611] quota admission added evaluator for: statefulsets.apps
statefulset.apps/nginx created
query for statefulsets had limit param
query for pods had limit param
query for events had limit param
query for statefulsets had user-specified limit param
Successful describe statefulsets verbose logs:
I0513 22:34:40.032471 83644 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config
I0513 22:34:40.037097 83644 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0513 22:34:40.057255 83644 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1652481279-1161/statefulsets?limit=500 200 OK in 1 milliseconds
I0513 22:34:40.059450 83644 round_trippers.go:553] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1652481279-1161/statefulsets/nginx 200 OK in 1 milliseconds
I0513 22:34:40.062413 83644 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481279-1161/pods?labelSelector=app%3Dnginx-statefulset&limit=500 200 OK in 1 milliseconds
I0513 22:34:40.064069 83644 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481279-1161/events?fieldSelector=involvedObject.name%3Dnginx%2CinvolvedObject.namespace%3Dnamespace-1652481279-1161%2CinvolvedObject.kind%3DStatefulSet%2CinvolvedObject.uid%3D8a62704a-4276-4150-81b8-8fe6749c31fa&limit=500 200 OK in 1 milliseconds
apps.sh:518: Successful get statefulset nginx {{.spec.replicas}}: 0
apps.sh:519: Successful get statefulset nginx {{.status.observedGeneration}}: 1
statefulset.apps/nginx scaled
I0513 22:34:40.355838 56663 event.go:294] "Event occurred" object="namespace-1652481279-1161/nginx" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod nginx-0 in StatefulSet nginx successful"
apps.sh:523: Successful get statefulset nginx {{.spec.replicas}}: 1
apps.sh:524: Successful get statefulset nginx {{.status.observedGeneration}}: 2
statefulset.apps/nginx restarted
apps.sh:532: Successful get statefulset nginx {{.status.observedGeneration}}: 3
statefulset.apps "nginx" deleted
I0513 22:34:40.782670 56663 stateful_set.go:443] StatefulSet has been deleted namespace-1652481279-1161/nginx
+++ exit code: 0
Recording: run_statefulset_history_tests
Running command: run_statefulset_history_tests
+++ Running case: test-cmd.run_statefulset_history_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_statefulset_history_tests
+++ [0513 22:34:40] Creating namespace namespace-1652481280-24781
namespace/namespace-1652481280-24781 created
Context "test" modified.
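[editor's note] The statefulset checks above (apps.sh:519 through apps.sh:532) verify that each spec mutation bumps the object's generation and that the controller reports it back in .status.observedGeneration. A short sketch of the same flow, using the nginx StatefulSet from the log:

    # Scaling changes .spec.replicas, so metadata.generation rises; observedGeneration follows
    kubectl scale statefulset nginx --replicas=1
    # rollout restart patches the pod template, bumping the generation again
    kubectl rollout restart statefulset nginx
    kubectl get statefulset nginx -o jsonpath='{.status.observedGeneration}'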
+++ [0513 22:34:41] Testing kubectl(v1:statefulsets, v1:controllerrevisions) apps.sh:456: Successful get statefulset {{range.items}}{{.metadata.name}}:{{end}}: (BFlag --record has been deprecated, --record will be removed in the future statefulset.apps/nginx created apps.sh:460: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true"},"labels":{"app":"nginx-statefulset"},"name":"nginx","namespace":"namespace-1652481280-24781"},"spec":{"replicas":0,"selector":{"matchLabels":{"app":"nginx-statefulset"}},"serviceName":"nginx","template":{"metadata":{"labels":{"app":"nginx-statefulset"}},"spec":{"containers":[{"command":["sh","-c","while true; do sleep 1; done"],"image":"k8s.gcr.io/nginx-slim:0.7","name":"nginx","ports":[{"containerPort":80,"name":"web"}]}],"terminationGracePeriodSeconds":5}},"updateStrategy":{"type":"RollingUpdate"}}} kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true]: (Bstatefulset.apps/nginx skipped rollback (current template already matches revision 1) apps.sh:463: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7: (Bapps.sh:464: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1 (BFlag --record has been deprecated, --record will be removed in the future statefulset.apps/nginx configured apps.sh:467: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8: (Bapps.sh:468: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0: (Bapps.sh:469: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2 (Bapps.sh:470: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true"},"labels":{"app":"nginx-statefulset"},"name":"nginx","namespace":"namespace-1652481280-24781"},"spec":{"replicas":0,"selector":{"matchLabels":{"app":"nginx-statefulset"}},"serviceName":"nginx","template":{"metadata":{"labels":{"app":"nginx-statefulset"}},"spec":{"containers":[{"command":["sh","-c","while true; do sleep 1; done"],"image":"k8s.gcr.io/nginx-slim:0.7","name":"nginx","ports":[{"containerPort":80,"name":"web"}]}],"terminationGracePeriodSeconds":5}},"updateStrategy":{"type":"RollingUpdate"}}} kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true]:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl 
apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true"},"labels":{"app":"nginx-statefulset"},"name":"nginx","namespace":"namespace-1652481280-24781"},"spec":{"replicas":0,"selector":{"matchLabels":{"app":"nginx-statefulset"}},"serviceName":"nginx","template":{"metadata":{"labels":{"app":"nginx-statefulset"}},"spec":{"containers":[{"command":["sh","-c","while true; do sleep 1; done"],"image":"k8s.gcr.io/nginx-slim:0.8","name":"nginx","ports":[{"containerPort":80,"name":"web"}]},{"image":"k8s.gcr.io/pause:2.0","name":"pause","ports":[{"containerPort":81,"name":"web-2"}]}],"terminationGracePeriodSeconds":5}},"updateStrategy":{"type":"RollingUpdate"}}} kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true]:
statefulset.apps/nginx will roll back to Pod Template: Labels: app=nginx-statefulset Containers: nginx: Image: k8s.gcr.io/nginx-slim:0.7 Port: 80/TCP Host Port: 0/TCP Command: sh -c while true; do sleep 1; done Environment: Mounts: Volumes: (dry run)
statefulset.apps/nginx rolled back (server dry run)
apps.sh:474: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
apps.sh:475: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:476: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
statefulset.apps/nginx rolled back
apps.sh:479: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:480: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
apps.sh:484: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:485: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
statefulset.apps/nginx rolled back
apps.sh:488: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
apps.sh:489: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:490: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
statefulset.apps "nginx" deleted
I0513 22:34:43.244802 56663 stateful_set.go:443] StatefulSet has been deleted namespace-1652481280-24781/nginx
+++ exit code: 0
Recording: run_lists_tests
Running command: run_lists_tests
+++ Running case: test-cmd.run_lists_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_lists_tests
+++ [0513 22:34:43] Creating namespace namespace-1652481283-20331
namespace/namespace-1652481283-20331 created
Context "test" modified.
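[editor's note] The history tests above apply two revisions of the rollingupdate-statefulset testdata, then roll back and forward between them via ControllerRevisions; the "(dry run)" and "(server dry run)" lines come from the dry-run variants of rollout undo. A sketch of the same sequence, assuming the repo's testdata files and the nginx StatefulSet from the log (--record is deprecated, as the warnings above note):

    kubectl apply -f hack/testdata/rollingupdate-statefulset.yaml --record        # revision 1
    kubectl apply -f hack/testdata/rollingupdate-statefulset-rv2.yaml --record    # revision 2
    kubectl rollout history statefulset nginx
    kubectl rollout undo statefulset nginx --dry-run=client    # prints the template it would restore
    kubectl rollout undo statefulset nginx                     # back to revision 1
    kubectl rollout undo statefulset nginx --to-revision=2     # forward again; a bogus revision fails as shown above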
+++ [0513 22:34:43] Testing kubectl(v1:lists) I0513 22:34:43.625627 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481283-20331/list-service-test" clusterIPs=map[IPv4:10.0.0.93] service/list-service-test created deployment.apps/list-deployment-test created I0513 22:34:43.640183 56663 event.go:294] "Event occurred" object="namespace-1652481283-20331/list-deployment-test" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set list-deployment-test-6bf48574cd to 1" I0513 22:34:43.653958 56663 event.go:294] "Event occurred" object="namespace-1652481283-20331/list-deployment-test-6bf48574cd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: list-deployment-test-6bf48574cd-pcwzq" service "list-service-test" deleted deployment.apps "list-deployment-test" deleted +++ exit code: 0 Recording: run_multi_resources_tests Running command: run_multi_resources_tests +++ Running case: test-cmd.run_multi_resources_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_multi_resources_tests +++ [0513 22:34:43] Creating namespace namespace-1652481283-1280 namespace/namespace-1652481283-1280 created Context "test" modified. +++ [0513 22:34:43] Testing kubectl(v1:multiple resources) Testing with file hack/testdata/multi-resource-yaml.yaml and replace with file hack/testdata/multi-resource-yaml-modify.yaml generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: (Bgeneric-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (BI0513 22:34:44.181861 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481283-1280/mock" clusterIPs=map[IPv4:10.0.0.221] service/mock created replicationcontroller/mock created I0513 22:34:44.198171 56663 event.go:294] "Event occurred" object="namespace-1652481283-1280/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-924jq" generic-resources.sh:72: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock: (Bgeneric-resources.sh:80: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock: (BNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mock ClusterIP 10.0.0.221 99/TCP 0s NAME DESIRED CURRENT READY AGE replicationcontroller/mock 1 1 0 0s Name: mock Namespace: namespace-1652481283-1280 Labels: app=mock Annotations: Selector: app=mock Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.221 IPs: 10.0.0.221 Port: 99/TCP TargetPort: 9949/TCP Endpoints: Session Affinity: None Events: Name: mock Namespace: namespace-1652481283-1280 Selector: app=mock Labels: app=mock Annotations: Replicas: 1 current / 1 desired Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=mock Containers: mock-container: Image: k8s.gcr.io/pause:3.7 Port: 9949/TCP Host Port: 0/TCP Environment: Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s replication-controller Created pod: mock-924jq service "mock" deleted replicationcontroller "mock" deleted I0513 22:34:44.781072 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481283-1280/mock" clusterIPs=map[IPv4:10.0.0.254] service/mock replaced replicationcontroller/mock replaced I0513 22:34:44.794750 56663 event.go:294] "Event occurred" object="namespace-1652481283-1280/mock" fieldPath="" 
kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-kdnqv" generic-resources.sh:96: Successful get services mock {{.metadata.labels.status}}: replaced (Bgeneric-resources.sh:102: Successful get rc mock {{.metadata.labels.status}}: replaced (Bservice/mock edited replicationcontroller/mock edited generic-resources.sh:114: Successful get services mock {{.metadata.labels.status}}: edited (Bgeneric-resources.sh:120: Successful get rc mock {{.metadata.labels.status}}: edited (Bservice/mock labeled replicationcontroller/mock labeled generic-resources.sh:134: Successful get services mock {{.metadata.labels.labeled}}: true (Bgeneric-resources.sh:140: Successful get rc mock {{.metadata.labels.labeled}}: true (Bservice/mock annotated replicationcontroller/mock annotated generic-resources.sh:153: Successful get services mock {{.metadata.annotations.annotated}}: true (Bgeneric-resources.sh:159: Successful get rc mock {{.metadata.annotations.annotated}}: true (Bservice "mock" deleted replicationcontroller "mock" deleted Testing with file hack/testdata/multi-resource-list.json and replace with file hack/testdata/multi-resource-list-modify.json generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: (Bgeneric-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (BI0513 22:34:46.223821 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481283-1280/mock" clusterIPs=map[IPv4:10.0.0.8] service/mock created replicationcontroller/mock created I0513 22:34:46.238209 56663 event.go:294] "Event occurred" object="namespace-1652481283-1280/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-75twj" generic-resources.sh:72: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock: (Bgeneric-resources.sh:80: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock: (BNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mock ClusterIP 10.0.0.8 99/TCP 0s NAME DESIRED CURRENT READY AGE replicationcontroller/mock 1 1 0 0s Name: mock Namespace: namespace-1652481283-1280 Labels: app=mock Annotations: Selector: app=mock Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.8 IPs: 10.0.0.8 Port: 99/TCP TargetPort: 9949/TCP Endpoints: Session Affinity: None Events: Name: mock Namespace: namespace-1652481283-1280 Selector: app=mock Labels: app=mock Annotations: Replicas: 1 current / 1 desired Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=mock Containers: mock-container: Image: k8s.gcr.io/pause:3.7 Port: 9949/TCP Host Port: 0/TCP Environment: Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s replication-controller Created pod: mock-75twj service "mock" deleted replicationcontroller "mock" deleted I0513 22:34:46.808649 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481283-1280/mock" clusterIPs=map[IPv4:10.0.0.54] service/mock replaced replicationcontroller/mock replaced I0513 22:34:46.823065 56663 event.go:294] "Event occurred" object="namespace-1652481283-1280/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-2bb6s" generic-resources.sh:96: Successful get services mock {{.metadata.labels.status}}: replaced (Bgeneric-resources.sh:102: Successful get rc mock 
{{.metadata.labels.status}}: replaced (Bservice/mock edited replicationcontroller/mock edited generic-resources.sh:114: Successful get services mock {{.metadata.labels.status}}: edited (Bgeneric-resources.sh:120: Successful get rc mock {{.metadata.labels.status}}: edited (Bservice/mock labeled replicationcontroller/mock labeled generic-resources.sh:134: Successful get services mock {{.metadata.labels.labeled}}: true (Bgeneric-resources.sh:140: Successful get rc mock {{.metadata.labels.labeled}}: true (Bservice/mock annotated replicationcontroller/mock annotated generic-resources.sh:153: Successful get services mock {{.metadata.annotations.annotated}}: true (Bgeneric-resources.sh:159: Successful get rc mock {{.metadata.annotations.annotated}}: true (Bservice "mock" deleted replicationcontroller "mock" deleted Testing with file hack/testdata/multi-resource-json.json and replace with file hack/testdata/multi-resource-json-modify.json generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: (Bgeneric-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (BI0513 22:34:48.128017 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481283-1280/mock" clusterIPs=map[IPv4:10.0.0.182] service/mock created replicationcontroller/mock created I0513 22:34:48.167761 56663 event.go:294] "Event occurred" object="namespace-1652481283-1280/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-4rs8g" generic-resources.sh:72: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock: (Bgeneric-resources.sh:80: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock: (BNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mock ClusterIP 10.0.0.182 99/TCP 0s NAME DESIRED CURRENT READY AGE replicationcontroller/mock 1 1 0 0s Name: mock Namespace: namespace-1652481283-1280 Labels: app=mock Annotations: Selector: app=mock Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.182 IPs: 10.0.0.182 Port: 99/TCP TargetPort: 9949/TCP Endpoints: Session Affinity: None Events: Name: mock Namespace: namespace-1652481283-1280 Selector: app=mock Labels: app=mock Annotations: Replicas: 1 current / 1 desired Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=mock Containers: mock-container: Image: k8s.gcr.io/pause:3.7 Port: 9949/TCP Host Port: 0/TCP Environment: Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s replication-controller Created pod: mock-4rs8g service "mock" deleted replicationcontroller "mock" deleted I0513 22:34:48.774282 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481283-1280/mock" clusterIPs=map[IPv4:10.0.0.100] service/mock replaced replicationcontroller/mock replaced I0513 22:34:48.793118 56663 event.go:294] "Event occurred" object="namespace-1652481283-1280/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-2pnxc" generic-resources.sh:96: Successful get services mock {{.metadata.labels.status}}: replaced (Bgeneric-resources.sh:102: Successful get rc mock {{.metadata.labels.status}}: replaced (Bservice/mock edited replicationcontroller/mock edited generic-resources.sh:114: Successful get services mock {{.metadata.labels.status}}: edited (Bgeneric-resources.sh:120: Successful get rc mock {{.metadata.labels.status}}: edited 
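[editor's note] Each "Testing with file ..." pass above runs the same verb sequence against every object in one multi-document file (here a Service plus a ReplicationController, both named mock), which is why each operation reports two resources. A minimal sketch of that pattern, using the testdata paths from the log; the label and annotation keys are the ones the assertions check:

    kubectl create -f hack/testdata/multi-resource-yaml.yaml
    # The -modify variant carries labels.status=replaced, which generic-resources.sh:96/102 assert
    kubectl replace -f hack/testdata/multi-resource-yaml-modify.yaml
    # label/annotate also accept -f and touch every object in the file (kubectl edit is used the same way above)
    kubectl label -f hack/testdata/multi-resource-yaml-modify.yaml labeled=true
    kubectl annotate -f hack/testdata/multi-resource-yaml-modify.yaml annotated=true
    kubectl delete -f hack/testdata/multi-resource-yaml-modify.yaml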
(Bservice/mock labeled replicationcontroller/mock labeled generic-resources.sh:134: Successful get services mock {{.metadata.labels.labeled}}: true (Bgeneric-resources.sh:140: Successful get rc mock {{.metadata.labels.labeled}}: true (Bservice/mock annotated replicationcontroller/mock annotated generic-resources.sh:153: Successful get services mock {{.metadata.annotations.annotated}}: true (Bgeneric-resources.sh:159: Successful get rc mock {{.metadata.annotations.annotated}}: true (Bservice "mock" deleted replicationcontroller "mock" deleted Testing with file hack/testdata/multi-resource-rclist.json and replace with file hack/testdata/multi-resource-rclist-modify.json generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: (Bgeneric-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (Breplicationcontroller/mock created replicationcontroller/mock2 created I0513 22:34:50.123460 56663 event.go:294] "Event occurred" object="namespace-1652481283-1280/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-52s5w" I0513 22:34:50.130046 56663 event.go:294] "Event occurred" object="namespace-1652481283-1280/mock2" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock2-57fnx" generic-resources.sh:78: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock:mock2: (BNAME DESIRED CURRENT READY AGE mock 1 1 0 0s mock2 1 1 0 0s Name: mock Namespace: namespace-1652481283-1280 Selector: app=mock Labels: app=mock status=replaced Annotations: Replicas: 1 current / 1 desired Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=mock Containers: mock-container: Image: k8s.gcr.io/pause:3.7 Port: 9949/TCP Host Port: 0/TCP Environment: Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s replication-controller Created pod: mock-52s5w Name: mock2 Namespace: namespace-1652481283-1280 Selector: app=mock2 Labels: app=mock2 status=replaced Annotations: Replicas: 1 current / 1 desired Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app=mock2 Containers: mock-container: Image: k8s.gcr.io/pause:3.7 Port: 9949/TCP Host Port: 0/TCP Environment: Mounts: Volumes: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 0s replication-controller Created pod: mock2-57fnx replicationcontroller "mock" deleted replicationcontroller "mock2" deleted replicationcontroller/mock replaced replicationcontroller/mock2 replaced I0513 22:34:50.608395 56663 event.go:294] "Event occurred" object="namespace-1652481283-1280/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-twrc2" I0513 22:34:50.615593 56663 event.go:294] "Event occurred" object="namespace-1652481283-1280/mock2" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock2-jrkmk" generic-resources.sh:102: Successful get rc mock {{.metadata.labels.status}}: replaced (Bgeneric-resources.sh:104: Successful get rc mock2 {{.metadata.labels.status}}: replaced (Breplicationcontroller/mock edited replicationcontroller/mock2 edited generic-resources.sh:120: Successful get rc mock {{.metadata.labels.status}}: edited (Bgeneric-resources.sh:122: Successful get rc mock2 
{{.metadata.labels.status}}: edited (Breplicationcontroller/mock labeled replicationcontroller/mock2 labeled generic-resources.sh:140: Successful get rc mock {{.metadata.labels.labeled}}: true (Bgeneric-resources.sh:142: Successful get rc mock2 {{.metadata.labels.labeled}}: true (Breplicationcontroller/mock annotated replicationcontroller/mock2 annotated generic-resources.sh:159: Successful get rc mock {{.metadata.annotations.annotated}}: true (Bgeneric-resources.sh:161: Successful get rc mock2 {{.metadata.annotations.annotated}}: true (Breplicationcontroller "mock" deleted replicationcontroller "mock2" deleted Testing with file hack/testdata/multi-resource-svclist.json and replace with file hack/testdata/multi-resource-svclist-modify.json generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: (Bgeneric-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: (BI0513 22:34:51.991591 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481283-1280/mock" clusterIPs=map[IPv4:10.0.0.107] service/mock created I0513 22:34:52.012171 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481283-1280/mock2" clusterIPs=map[IPv4:10.0.0.97] service/mock2 created generic-resources.sh:70: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock:mock2: (BNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE mock ClusterIP 10.0.0.107 99/TCP 1s mock2 ClusterIP 10.0.0.97 99/TCP 0s Name: mock Namespace: namespace-1652481283-1280 Labels: app=mock Annotations: Selector: app=mock Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.107 IPs: 10.0.0.107 Port: 99/TCP TargetPort: 9949/TCP Endpoints: Session Affinity: None Events: Name: mock2 Namespace: namespace-1652481283-1280 Labels: app=mock2 Annotations: Selector: app=mock2 Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.0.0.97 IPs: 10.0.0.97 Port: 99/TCP TargetPort: 9949/TCP Endpoints: Session Affinity: None Events: service "mock" deleted service "mock2" deleted I0513 22:34:52.523715 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481283-1280/mock" clusterIPs=map[IPv4:10.0.0.103] service/mock replaced I0513 22:34:52.543776 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481283-1280/mock2" clusterIPs=map[IPv4:10.0.0.142] service/mock2 replaced generic-resources.sh:96: Successful get services mock {{.metadata.labels.status}}: replaced (Bgeneric-resources.sh:98: Successful get services mock2 {{.metadata.labels.status}}: replaced (Bservice/mock edited service/mock2 edited generic-resources.sh:114: Successful get services mock {{.metadata.labels.status}}: edited (BW0513 22:34:52.998096 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0513 22:34:52.998123 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource generic-resources.sh:116: Successful get services mock2 {{.metadata.labels.status}}: edited (Bservice/mock labeled service/mock2 labeled generic-resources.sh:134: Successful get services mock {{.metadata.labels.labeled}}: true (Bgeneric-resources.sh:136: Successful get services mock2 {{.metadata.labels.labeled}}: true (Bservice/mock annotated service/mock2 annotated generic-resources.sh:153: 
Successful get services mock {{.metadata.annotations.annotated}}: true
generic-resources.sh:155: Successful get services mock2 {{.metadata.annotations.annotated}}: true
service "mock" deleted
service "mock2" deleted
generic-resources.sh:173: Successful get services {{range.items}}{{.metadata.name}}:{{end}}:
generic-resources.sh:174: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
I0513 22:34:54.045243 53075 alloc.go:327] "allocated clusterIPs" service="namespace-1652481283-1280/mock" clusterIPs=map[IPv4:10.0.0.204]
service/mock created
replicationcontroller/mock created
I0513 22:34:54.061451 56663 event.go:294] "Event occurred" object="namespace-1652481283-1280/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-scqrd"
I0513 22:34:54.103374 56663 horizontal.go:360] Horizontal Pod Autoscaler frontend has been deleted in namespace-1652481271-16016
generic-resources.sh:180: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock:
generic-resources.sh:181: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock:
service "mock" deleted
replicationcontroller "mock" deleted
generic-resources.sh:187: Successful get services {{range.items}}{{.metadata.name}}:{{end}}:
generic-resources.sh:188: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}:
+++ exit code: 0
Recording: run_persistent_volumes_tests
Running command: run_persistent_volumes_tests
+++ Running case: test-cmd.run_persistent_volumes_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_persistent_volumes_tests
+++ [0513 22:34:54] Creating namespace namespace-1652481294-2989
namespace/namespace-1652481294-2989 created
Context "test" modified.
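[editor's note] The persistent-volume tests that follow create pv0001 through pv0003 from repo testdata and then exercise describe and delete on them. A minimal PersistentVolume comparable to those fixtures can be created like this (a sketch only: the capacity, access mode, and hostPath values here are illustrative assumptions, not taken from the log):

    # PVs are cluster-scoped; deleting one with a namespace in effect triggers the
    # "deleting cluster-scoped resources, not scoped to the provided namespace" warning seen below
    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv0001
    spec:
      capacity:
        storage: 1Gi
      accessModes: ["ReadWriteOnce"]
      hostPath:
        path: /tmp/pv0001
    EOF
    kubectl get pv
    kubectl delete pv pv0001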
+++ [0513 22:34:54] Testing persistent volumes
storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}:
persistentvolume/pv0001 created
E0513 22:34:54.877415 56663 pv_protection_controller.go:114] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
persistentvolume "pv0001" deleted
persistentvolume/pv0002 created
storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
persistentvolume "pv0002" deleted
persistentvolume/pv0003 created
E0513 22:34:55.505327 56663 pv_protection_controller.go:114] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
query for persistentvolumes had limit param
query for events had limit param
query for persistentvolumes had user-specified limit param
Successful describe persistentvolumes verbose logs:
I0513 22:34:55.622993 86235 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config
I0513 22:34:55.629214 86235 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds
I0513 22:34:55.652791 86235 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes?limit=500 200 OK in 1 milliseconds
I0513 22:34:55.654983 86235 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes/pv0003 200 OK in 1 milliseconds
I0513 22:34:55.664672 86235 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dpv0003%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DPersistentVolume%2CinvolvedObject.uid%3D74fe8a09-6523-4658-b0b8-2fd6341f6f4d&limit=500 200 OK in 8 milliseconds
W0513 22:34:55.770164 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:34:55.770207 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
persistentvolume "pv0003" deleted
storage.sh:44: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}:
persistentvolume/pv0001 created
E0513 22:34:56.088880 56663 pv_protection_controller.go:114] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
E0513 22:34:56.115812 56663 pv_protection_controller.go:114] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:47: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "pv0001" deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "pv0001" deleted
has:persistentvolume "pv0001" deleted
storage.sh:51: Successful get
pv {{range.items}}{{.metadata.name}}:{{end}}: (B+++ exit code: 0 Recording: run_persistent_volume_claims_tests Running command: run_persistent_volume_claims_tests +++ Running case: test-cmd.run_persistent_volume_claims_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_persistent_volume_claims_tests +++ [0513 22:34:56] Creating namespace namespace-1652481296-2495 namespace/namespace-1652481296-2495 created Context "test" modified. +++ [0513 22:34:56] Testing persistent volumes claims storage.sh:66: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: (Bpersistentvolumeclaim/myclaim-1 created I0513 22:34:56.716358 56663 event.go:294] "Event occurred" object="namespace-1652481296-2495/myclaim-1" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set" I0513 22:34:56.725335 56663 event.go:294] "Event occurred" object="namespace-1652481296-2495/myclaim-1" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set" storage.sh:69: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: myclaim-1: (Bquery for persistentvolumeclaims had limit param query for pods had limit param query for events had limit param query for persistentvolumeclaims had user-specified limit param Successful describe persistentvolumeclaims verbose logs: I0513 22:34:56.836635 86467 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:34:56.842912 86467 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds I0513 22:34:56.863423 86467 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481296-2495/persistentvolumeclaims?limit=500 200 OK in 1 milliseconds I0513 22:34:56.865357 86467 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481296-2495/persistentvolumeclaims/myclaim-1 200 OK in 1 milliseconds I0513 22:34:56.866864 86467 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481296-2495/pods?limit=500 200 OK in 1 milliseconds I0513 22:34:56.869422 86467 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481296-2495/events?fieldSelector=involvedObject.namespace%3Dnamespace-1652481296-2495%2CinvolvedObject.kind%3DPersistentVolumeClaim%2CinvolvedObject.uid%3D3ad75635-e2d8-41dd-b6cd-23afe4ace56a%2CinvolvedObject.name%3Dmyclaim-1&limit=500 200 OK in 1 milliseconds (Bpersistentvolumeclaim "myclaim-1" deleted I0513 22:34:57.012652 56663 event.go:294] "Event occurred" object="namespace-1652481296-2495/myclaim-1" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set" W0513 22:34:57.026021 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0513 22:34:57.026050 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource persistentvolumeclaim/myclaim-2 created I0513 22:34:57.192114 56663 event.go:294] "Event occurred" 
object="namespace-1652481296-2495/myclaim-2" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set" I0513 22:34:57.199364 56663 event.go:294] "Event occurred" object="namespace-1652481296-2495/myclaim-2" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set" storage.sh:75: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: myclaim-2: (Bpersistentvolumeclaim "myclaim-2" deleted I0513 22:34:57.332359 56663 event.go:294] "Event occurred" object="namespace-1652481296-2495/myclaim-2" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set" persistentvolumeclaim/myclaim-3 created I0513 22:34:57.491881 56663 event.go:294] "Event occurred" object="namespace-1652481296-2495/myclaim-3" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set" I0513 22:34:57.499724 56663 event.go:294] "Event occurred" object="namespace-1652481296-2495/myclaim-3" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set" storage.sh:79: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: myclaim-3: (Bpersistentvolumeclaim "myclaim-3" deleted I0513 22:34:57.636207 56663 event.go:294] "Event occurred" object="namespace-1652481296-2495/myclaim-3" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="FailedBinding" message="no persistent volumes available for this claim and no storage class is set" storage.sh:82: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: (B+++ exit code: 0 Recording: run_storage_class_tests Running command: run_storage_class_tests +++ Running case: test-cmd.run_storage_class_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_storage_class_tests +++ [0513 22:34:57] Testing storage class storage.sh:96: Successful get storageclass {{range.items}}{{.metadata.name}}:{{end}}: (Bstorageclass.storage.k8s.io/storage-class-name created storage.sh:112: Successful get storageclass {{range.items}}{{.metadata.name}}:{{end}}: storage-class-name: (Bstorage.sh:113: Successful get sc {{range.items}}{{.metadata.name}}:{{end}}: storage-class-name: (Bquery for storageclasses had limit param query for events had limit param query for storageclasses had user-specified limit param Successful describe storageclasses verbose logs: I0513 22:34:58.213875 86711 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:34:58.219271 86711 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds I0513 22:34:58.243358 86711 round_trippers.go:553] GET https://127.0.0.1:6443/apis/storage.k8s.io/v1/storageclasses?limit=500 200 OK in 1 milliseconds I0513 22:34:58.245489 86711 round_trippers.go:553] GET https://127.0.0.1:6443/apis/storage.k8s.io/v1/storageclasses/storage-class-name 200 OK in 1 milliseconds I0513 22:34:58.255339 86711 round_trippers.go:553] GET 
https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dstorage-class-name%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DStorageClass%2CinvolvedObject.uid%3D947396e2-f770-42b2-9790-5f9c05f2006d&limit=500 200 OK in 9 milliseconds (Bstorageclass.storage.k8s.io "storage-class-name" deleted storage.sh:118: Successful get storageclass {{range.items}}{{.metadata.name}}:{{end}}: (B+++ exit code: 0 Recording: run_nodes_tests Running command: run_nodes_tests +++ Running case: test-cmd.run_nodes_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_nodes_tests +++ [0513 22:34:58] Testing kubectl(v1:nodes) core.sh:1551: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1: (Bmatched Name: matched Labels: matched CreationTimestamp: matched Conditions: matched Addresses: matched Capacity: matched Pods: core.sh:1553: Successful describe nodes 127.0.0.1: Name: 127.0.0.1 Roles: Labels: Annotations: node.alpha.kubernetes.io/ttl: 0 save-managers: true CreationTimestamp: Fri, 13 May 2022 22:29:40 +0000 Taints: node.kubernetes.io/unreachable:NoSchedule Unschedulable: false Lease: Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- Ready Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. MemoryPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. DiskPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. PIDPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. Addresses: Capacity: memory: 1Gi Allocatable: memory: 1Gi System Info: Machine ID: System UUID: Boot ID: Kernel Version: OS Image: Operating System: Architecture: Container Runtime Version: Kubelet Version: Kube-Proxy Version: Non-terminated Pods: (0 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 0 (0%) 0 (0%) memory 0 (0%) 0 (0%) ephemeral-storage 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal RegisteredNode 5m14s node-controller Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller (Bcore.sh:1555: Successful describe Name: 127.0.0.1 Roles: Labels: Annotations: node.alpha.kubernetes.io/ttl: 0 save-managers: true CreationTimestamp: Fri, 13 May 2022 22:29:40 +0000 Taints: node.kubernetes.io/unreachable:NoSchedule Unschedulable: false Lease: Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- Ready Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. MemoryPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. 
DiskPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. PIDPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. Addresses: Capacity: memory: 1Gi Allocatable: memory: 1Gi System Info: Machine ID: System UUID: Boot ID: Kernel Version: OS Image: Operating System: Architecture: Container Runtime Version: Kubelet Version: Kube-Proxy Version: Non-terminated Pods: (0 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 0 (0%) 0 (0%) memory 0 (0%) 0 (0%) ephemeral-storage 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal RegisteredNode 5m14s node-controller Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller (B core.sh:1557: Successful describe Name: 127.0.0.1 Roles: Labels: Annotations: node.alpha.kubernetes.io/ttl: 0 save-managers: true CreationTimestamp: Fri, 13 May 2022 22:29:40 +0000 Taints: node.kubernetes.io/unreachable:NoSchedule Unschedulable: false Lease: Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- Ready Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. MemoryPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. DiskPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. PIDPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. Addresses: Capacity: memory: 1Gi Allocatable: memory: 1Gi System Info: Machine ID: System UUID: Boot ID: Kernel Version: OS Image: Operating System: Architecture: Container Runtime Version: Kubelet Version: Kube-Proxy Version: Non-terminated Pods: (0 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 0 (0%) 0 (0%) memory 0 (0%) 0 (0%) ephemeral-storage 0 (0%) 0 (0%) (B core.sh:1559: Successful describe Name: 127.0.0.1 Roles: Labels: Annotations: node.alpha.kubernetes.io/ttl: 0 save-managers: true CreationTimestamp: Fri, 13 May 2022 22:29:40 +0000 Taints: node.kubernetes.io/unreachable:NoSchedule Unschedulable: false Lease: Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- Ready Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. MemoryPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. 
DiskPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. PIDPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. Addresses: Capacity: memory: 1Gi Allocatable: memory: 1Gi System Info: Machine ID: System UUID: Boot ID: Kernel Version: OS Image: Operating System: Architecture: Container Runtime Version: Kubelet Version: Kube-Proxy Version: Non-terminated Pods: (0 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 0 (0%) 0 (0%) memory 0 (0%) 0 (0%) ephemeral-storage 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal RegisteredNode 5m15s node-controller Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller (B matched Name: matched Labels: matched CreationTimestamp: matched Conditions: matched Addresses: matched Capacity: matched Pods: Successful describe nodes: Name: 127.0.0.1 Roles: Labels: Annotations: node.alpha.kubernetes.io/ttl: 0 save-managers: true CreationTimestamp: Fri, 13 May 2022 22:29:40 +0000 Taints: node.kubernetes.io/unreachable:NoSchedule Unschedulable: false Lease: Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- Ready Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. MemoryPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. DiskPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. PIDPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. Addresses: Capacity: memory: 1Gi Allocatable: memory: 1Gi System Info: Machine ID: System UUID: Boot ID: Kernel Version: OS Image: Operating System: Architecture: Container Runtime Version: Kubelet Version: Kube-Proxy Version: Non-terminated Pods: (0 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 0 (0%) 0 (0%) memory 0 (0%) 0 (0%) ephemeral-storage 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal RegisteredNode 5m15s node-controller Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller (BSuccessful describe Name: 127.0.0.1 Roles: Labels: Annotations: node.alpha.kubernetes.io/ttl: 0 save-managers: true CreationTimestamp: Fri, 13 May 2022 22:29:40 +0000 Taints: node.kubernetes.io/unreachable:NoSchedule Unschedulable: false Lease: Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- Ready Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. MemoryPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. DiskPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. PIDPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. Addresses: Capacity: memory: 1Gi Allocatable: memory: 1Gi System Info: Machine ID: System UUID: Boot ID: Kernel Version: OS Image: Operating System: Architecture: Container Runtime Version: Kubelet Version: Kube-Proxy Version: Non-terminated Pods: (0 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 0 (0%) 0 (0%) memory 0 (0%) 0 (0%) ephemeral-storage 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal RegisteredNode 5m15s node-controller Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller (BSuccessful describe Name: 127.0.0.1 Roles: Labels: Annotations: node.alpha.kubernetes.io/ttl: 0 save-managers: true CreationTimestamp: Fri, 13 May 2022 22:29:40 +0000 Taints: node.kubernetes.io/unreachable:NoSchedule Unschedulable: false Lease: Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- Ready Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. MemoryPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. DiskPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. PIDPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. 
Addresses: Capacity: memory: 1Gi Allocatable: memory: 1Gi System Info: Machine ID: System UUID: Boot ID: Kernel Version: OS Image: Operating System: Architecture: Container Runtime Version: Kubelet Version: Kube-Proxy Version: Non-terminated Pods: (0 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 0 (0%) 0 (0%) memory 0 (0%) 0 (0%) ephemeral-storage 0 (0%) 0 (0%) (BSuccessful describe Name: 127.0.0.1 Roles: Labels: Annotations: node.alpha.kubernetes.io/ttl: 0 save-managers: true CreationTimestamp: Fri, 13 May 2022 22:29:40 +0000 Taints: node.kubernetes.io/unreachable:NoSchedule Unschedulable: false Lease: Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- Ready Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. MemoryPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. DiskPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. PIDPressure Unknown Fri, 13 May 2022 22:29:40 +0000 Fri, 13 May 2022 22:30:44 +0000 NodeStatusNeverUpdated Kubelet never posted node status. Addresses: Capacity: memory: 1Gi Allocatable: memory: 1Gi System Info: Machine ID: System UUID: Boot ID: Kernel Version: OS Image: Operating System: Architecture: Container Runtime Version: Kubelet Version: Kube-Proxy Version: Non-terminated Pods: (0 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 0 (0%) 0 (0%) memory 0 (0%) 0 (0%) ephemeral-storage 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal RegisteredNode 5m15s node-controller Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller (Bquery for nodes had limit param query for pods had limit param query for events had limit param query for nodes had user-specified limit param Successful describe nodes verbose logs: I0513 22:34:59.585092 86959 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:34:59.589526 86959 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:34:59.617007 86959 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?limit=500 200 OK in 5 milliseconds I0513 22:34:59.620438 86959 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes/127.0.0.1 200 OK in 1 milliseconds I0513 22:34:59.622330 86959 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500 200 OK in 1 milliseconds I0513 22:34:59.631858 86959 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3D127.0.0.1%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DNode%2CinvolvedObject.uid%3D11499abd-8ea0-41ee-84ae-033afaeb34c0&limit=500 200 OK in 8 milliseconds I0513 22:34:59.641804 86959 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3D127.0.0.1%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DNode%2CinvolvedObject.uid%3D127.0.0.1&limit=500 200 OK in 9 milliseconds I0513 22:34:59.642978 86959 round_trippers.go:553] GET https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/127.0.0.1 404 Not Found in 1 milliseconds (Bcore.sh:1573: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (Bnode/127.0.0.1 patched core.sh:1576: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: true (Bnode/127.0.0.1 patched core.sh:1579: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (Btokenreview.authentication.k8s.io/ created +++ exit code: 0 Recording: run_exec_credentials_tests Running command: run_exec_credentials_tests +++ Running case: test-cmd.run_exec_credentials_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_exec_credentials_tests +++ [0513 22:35:00] Testing kubectl with configured client.authentication.k8s.io/v1beta1 exec credentials plugin +++ [0513 22:35:00] exec credential plugin not triggered since kubectl was called with provided --token +++ [0513 22:35:00] exec credential plugin triggered since kubectl was called without provided --token +++ [0513 22:35:00] exec credential plugin triggered and provided valid credentials +++ [0513 22:35:00] exec credential plugin not triggered since kubectl was called with provided --username/--password certificatesigningrequest.certificates.k8s.io/testuser created authentication.sh:152: Successful get csr/testuser {{range.status.conditions}}{{.type}}{{end}}: (Bcertificatesigningrequest.certificates.k8s.io/testuser approved authentication.sh:154: Successful get csr/testuser {{range.status.conditions}}{{.type}}{{end}}: Approved (Bauthentication.sh:156: Successful get csr/testuser {{.status.certificate}}: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMxakNDQWI2Z0F3SUJBZ0lRSXhFTVhLaVdkYnBKREkzMS9YaEsrREFOQmdrcWhraUc5dzBCQVFzRkFEQVUKTVJJd0VBWURWUVFEREFreE1qY3VNQzR3TGpFd0hoY05Nakl3TlRFek1qSXpNREF3V2hjTk1qTXdOVEV6TWpJegpNREF3V2pBVE1SRXdEd1lEVlFRREV3aDBaWE4wZFhObGNqQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQCkFEQ0NBUW9DZ2dFQkFMUXhaWE1YZEFMM1oxQkxNRCswR1ZyZWlSMDJSSTNKMzVKdWZNVTNFZ2FhZWIvbTlLRFYKUFp2ekxIcXpRaGN2TlFyYVZwajF3SFhhZEwzcWJkaFg3dlZnVldkZUVhbUJ6RlNWQ0J5di9LU1p3eVhuamtkZwpGR3psM2FiKy90eDlpT3EyYTY4WEFSNUI1WElqY09sWXNTN2NNVUpxelZPa0FHY09DQTV3c043VmdTTUdDam1FCjJXaU9jcTQ2b3JNbDhoYU9OdUgvQjYrL0ZFeG43bDkyTGlLVnBLSGNuWGVTYSthK3VFUGFlZnBhcFIyT0ZLd0wKeWc1T2FmTnViVkUxdW1BN3JEdElKdE9uOFRGdmRnaStyVlBxYk9LajgzUTNoVkNCTGE1RDdyb1JHSlVIV0lZZgo0bm1OS21GeWJ5eGdDM0JkREozNERkM3JYQ2pUd0ZwN01GVUNBd0VBQWFNbE1DTXdFd1lEVlIwbEJBd3dDZ1lJCkt3WUJCUVVIQXdJd0RBWURWUjBUQVFIL0JBSXdBREFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBUGtwZUp0K2gKKzZ4VXBMU3czK01BNTN5UDFDZVVPOE1vT0dQaEdoUEtGekxwcEtNU2Y0SmVjd2h6MzJpYldIeTl1TDQ4UWZRTwpEM0MwK05UdStreDdjcGVyVG5JVWQ5YWRIZUJIcTQybCt0QUVWbnVBTXk3RDA4SmpVTFJGOGxYUzdhbUE2SmtsCkpyamhPY09nbXBoa2VtejE0VkFqL0xLbk52bEE4a0pQU0lYc2RKTm1WcFQveWN6VjlJZmZrVkNEVkV3azhJUmQKUFplYzV1dFkzZUpRRkFJK1haMWovbVRwaVcxR01vencxZ25xQ2owMXppVTA1TkJic0RudDFpN09TK05JNVFDUApBM1BwelJGcmxGODg3UG1nMHowREI3TmFBSWtjZXVnRTUrZ1lkZDFvY3RHVzZ0dldWQ1R5WStVWVQ0cGxnWjU2Cm1WMnNua0dBL3B6SmpBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= (B+++ [0513 22:35:00] exec credential plugin not triggered since kubectl was called with provided --client-certificate/--client-key User "testuser" set. +++ [0513 22:35:01] exec credential plugin not triggered since kubeconfig was configured with --client-certificate/--client-key for authentication certificatesigningrequest.certificates.k8s.io "testuser" deleted +++ [0513 22:35:01] Testing kubectl with configured client.authentication.k8s.io/v1 exec credentials plugin +++ [0513 22:35:01] exec credential plugin not triggered since kubectl was called with provided --token +++ [0513 22:35:01] exec credential plugin triggered since kubectl was called without provided --token +++ [0513 22:35:01] exec credential plugin triggered and provided valid credentials +++ [0513 22:35:01] exec credential plugin not triggered since kubectl was called with provided --username/--password certificatesigningrequest.certificates.k8s.io/testuser created authentication.sh:152: Successful get csr/testuser {{range.status.conditions}}{{.type}}{{end}}: (Bcertificatesigningrequest.certificates.k8s.io/testuser approved authentication.sh:154: Successful get csr/testuser {{range.status.conditions}}{{.type}}{{end}}: Approved (Bauthentication.sh:156: Successful get csr/testuser {{.status.certificate}}: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMxekNDQWIrZ0F3SUJBZ0lSQVBhVTJ5UUJSTFdxY0pkNFZEcGpWYUV3RFFZSktvWklodmNOQVFFTEJRQXcKRkRFU01CQUdBMVVFQXd3Sk1USTNMakF1TUM0eE1CNFhEVEl5TURVeE16SXlNekF3TVZvWERUSXpNRFV4TXpJeQpNekF3TVZvd0V6RVJNQThHQTFVRUF4TUlkR1Z6ZEhWelpYSXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCCkR3QXdnZ0VLQW9JQkFRQzBNV1Z6RjNRQzkyZFFTekEvdEJsYTNva2ROa1NOeWQrU2JuekZOeElHbW5tLzV2U2cKMVQyYjh5eDZzMElYTHpVSzJsYVk5Y0IxMm5TOTZtM1lWKzcxWUZWblhoR3BnY3hVbFFnY3IveWttY01sNTQ1SApZQlJzNWQybS92N2NmWWpxdG11dkZ3RWVRZVZ5STNEcFdMRXUzREZDYXMxVHBBQm5EZ2dPY0xEZTFZRWpCZ281CmhObG9qbkt1T3FLekpmSVdqamJoL3dldnZ4Uk1aKzVmZGk0aWxhU2gzSjEza212bXZyaEQybm42V3FVZGpoU3MKQzhvT1RtbnpibTFSTmJwZ082dzdTQ2JUcC9FeGIzWUl2cTFUNm16aW8vTjBONFZRZ1MydVErNjZFUmlWQjFpRwpIK0o1alNwaGNtOHNZQXR3WFF5ZCtBM2Q2MXdvMDhCYWV6QlZBZ01CQUFHakpUQWpNQk1HQTFVZEpRUU1NQW9HCkNDc0dBUVVGQndNQ01Bd0dBMVVkRXdFQi93UUNNQUF3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUFKNm16VDAKUTBSVXMydmN3bUFjREUyWjVSbWZoU29nTFNmV2duNnR1OERHWHBGY0VQYUtEY0c2WmRqeFV5UG1KSWxCVlc4OQpZNEQyOWIvOWFSamhWT1QvMEovKy9ES2NHV3FUb292UXU3eEdBcHg2aHRYa3RlMnI2MW83cEI1ZmZNVVg0NWtGCkkvMTVpN3NubG5kMW4ycmF5TWlsSkpaMmdqck92M0p0b2xNVlR6b1AybzlkZFRRYTNXVER5WkZTTHdUVUdVOTEKNjh1V0RiOXZWVGRXQlB4VG5mZWhoc0lwM01JcENOR0swOFVaczFaVmp5TWpJQ3RZOHJuRWN3VnVhZmpoYVdvYQphNGVMa3lzb2VkMER1ZHFUU21NQURBUVJKWTVpa1AzVkVtY2Y1WVpNM0h5N3cwNEo4S2JMbmVBVmRaWmwrQkluCmkrTHJOeXdmNVNOcUxBRT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= (B+++ [0513 22:35:01] exec credential plugin not triggered since kubectl was called with provided --client-certificate/--client-key User "testuser" set. +++ [0513 22:35:01] exec credential plugin not triggered since kubeconfig was configured with --client-certificate/--client-key for authentication certificatesigningrequest.certificates.k8s.io "testuser" deleted +++ exit code: 0 Recording: run_exec_credentials_interactive_tests Running command: run_exec_credentials_interactive_tests +++ Running case: test-cmd.run_exec_credentials_interactive_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_exec_credentials_interactive_tests +++ [0513 22:35:02] Testing kubectl with configured client.authentication.k8s.io/v1beta1 interactive exec credentials plugin +++ [0513 22:35:02] Running command 'script -q /dev/null -c /tmp/test-cmd-exec-credentials-script-file.sh' (kubectl command: 'apply -f -') with input '{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"some-resource"}}' +++ [0513 22:35:02] exec credential plugin not run because kubectl already uses standard input +++ [0513 22:35:02] Running command 'script -q /dev/null -c /tmp/test-cmd-exec-credentials-script-file.sh' (kubectl command: 'set env deployment/some-deployment -') with input 'SOME_ENV_VAR_KEY=SOME_ENV_VAR_VAL' +++ [0513 22:35:02] exec credential plugin not run because kubectl already uses standard input +++ [0513 22:35:02] Running command 'script -q /dev/null -c /tmp/test-cmd-exec-credentials-script-file.sh' (kubectl command: 'replace -f - --force') with input '{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"some-resource"}}' +++ [0513 22:35:02] exec credential plugin not run because kubectl already uses standard input +++ [0513 22:35:03] client.authentication.k8s.io/v1beta1 exec credential plugin triggered and provided valid credentials +++ [0513 22:35:03] Testing kubectl with configured client.authentication.k8s.io/v1 interactive exec credentials plugin +++ [0513 22:35:03] Running command 'script -q /dev/null -c /tmp/test-cmd-exec-credentials-script-file.sh' (kubectl command: 'apply -f -') with input 
'{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"some-resource"}}' +++ [0513 22:35:03] exec credential plugin not run because kubectl already uses standard input +++ [0513 22:35:03] Running command 'script -q /dev/null -c /tmp/test-cmd-exec-credentials-script-file.sh' (kubectl command: 'set env deployment/some-deployment -') with input 'SOME_ENV_VAR_KEY=SOME_ENV_VAR_VAL' +++ [0513 22:35:03] exec credential plugin not run because kubectl already uses standard input +++ [0513 22:35:03] Running command 'script -q /dev/null -c /tmp/test-cmd-exec-credentials-script-file.sh' (kubectl command: 'replace -f - --force') with input '{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"some-resource"}}' +++ [0513 22:35:03] exec credential plugin not run because kubectl already uses standard input +++ [0513 22:35:03] kubeconfig was not loaded successfully because client.authentication.k8s.io/v1 exec credential plugin is missing interactiveMode +++ exit code: 0 Recording: run_authorization_tests Running command: run_authorization_tests +++ Running case: test-cmd.run_authorization_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_authorization_tests +++ [0513 22:35:03] Testing authorization subjectaccessreview.authorization.k8s.io/ created +++ [0513 22:35:04] "authorization.k8s.io/subjectaccessreviews" returns as expected: { "kind": "SubjectAccessReview", "apiVersion": "authorization.k8s.io/v1", "metadata": { "creationTimestamp": null, "managedFields": [ { "manager": "curl", "operation": "Update", "apiVersion": "authorization.k8s.io/v1", "time": "2022-05-13T22:35:04Z", "fieldsType": "FieldsV1", "fieldsV1": { "f:spec": { "f:groups": {}, "f:resourceAttributes": { ".": {}, "f:group": {}, "f:namespace": {}, "f:resource": {}, "f:verb": {} }, "f:user": {} } } } ] }, "spec": { "resourceAttributes": { "namespace": "ns", "verb": "create", "group": "autoscaling", "resource": "horizontalpodautoscalers" }, "user": "bob", "groups": [ "the-group" ] }, "status": { "allowed": true, "reason": "RBAC: allowed by ClusterRoleBinding \"super-group\" of ClusterRole \"admin\" to Group \"the-group\"" } } +++ exit code: 0 Successful (Bmessage:yes has:yes Successful (Bmessage:yes has:yes Successful (Bmessage:Warning: the server doesn't have a resource type 'invalid_resource' yes has:the server doesn't have a resource type Successful (Bmessage:yes has:yes Successful (Bmessage:error: --subresource can not be used with NonResourceURL has:subresource can not be used with NonResourceURL Successful (BSuccessful (Bmessage:yes 0 has:0 Successful (Bmessage:0 has:0 Successful (Bmessage:yes has not:Warning Successful (Bmessage:Warning: the server doesn't have a resource type 'foo' yes has:Warning: the server doesn't have a resource type 'foo' Successful (Bmessage:Warning: the server doesn't have a resource type 'foo' yes has not:Warning: resource 'foo' is not namespace scoped Successful (Bmessage:yes has not:Warning Successful (Bmessage:Warning: resource 'nodes' is not namespace scoped yes has:Warning: resource 'nodes' is not namespace scoped Successful (Bmessage:yes has not:Warning: resource 'nodes' is not namespace scoped clusterrole.rbac.authorization.k8s.io/testing-CR reconciled (dry run) reconciliation required create missing rules added: {Verbs:[create delete deletecollection get list patch update watch] APIGroups:[] Resources:[pods] ResourceNames:[] NonResourceURLs:[]} clusterrolebinding.rbac.authorization.k8s.io/testing-CRB reconciled (dry run) reconciliation required create missing subjects 
added: {Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:} rolebinding.rbac.authorization.k8s.io/testing-RB reconciled (dry run) reconciliation required create missing subjects added: {Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:} role.rbac.authorization.k8s.io/testing-R reconciled (dry run) reconciliation required create missing rules added: {Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]} legacy-script.sh:853: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: (Blegacy-script.sh:854: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: (Blegacy-script.sh:855: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: (Blegacy-script.sh:856: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: (Bclusterrole.rbac.authorization.k8s.io/testing-CR reconciled reconciliation required create missing rules added: {Verbs:[create delete deletecollection get list patch update watch] APIGroups:[] Resources:[pods] ResourceNames:[] NonResourceURLs:[]} clusterrolebinding.rbac.authorization.k8s.io/testing-CRB reconciled reconciliation required create missing subjects added: {Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:} rolebinding.rbac.authorization.k8s.io/testing-RB reconciled reconciliation required create missing subjects added: {Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:} role.rbac.authorization.k8s.io/testing-R reconciled reconciliation required create missing rules added: {Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]} legacy-script.sh:860: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB: (Blegacy-script.sh:861: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R: (Blegacy-script.sh:862: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB: (Blegacy-script.sh:863: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR: (BSuccessful (Bmessage:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole has:only rbac.authorization.k8s.io/v1 is supported rolebinding.rbac.authorization.k8s.io "testing-RB" deleted role.rbac.authorization.k8s.io "testing-R" deleted warning: deleting cluster-scoped resources, not scoped to the provided namespace clusterrole.rbac.authorization.k8s.io "testing-CR" deleted clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted Recording: run_retrieve_multiple_tests Running command: run_retrieve_multiple_tests +++ Running case: test-cmd.run_retrieve_multiple_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_retrieve_multiple_tests Context "test" modified. 
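The reconcile pass above runs twice: a dry run that reports the rules and subjects it would add, then a real pass that creates them. A minimal sketch of that sequence, assuming an illustrative manifest rbac-setup.yaml (a hypothetical stand-in for the test's RBAC fixture defining the testing-CR, testing-CRB, testing-RB, and testing-R objects):

# Sketch only; rbac-setup.yaml is hypothetical. Dry-run flag spelling is per recent kubectl.
# Report the missing rules/subjects without writing anything:
kubectl auth reconcile -f rbac-setup.yaml --dry-run=client
# Create the missing rules and subjects for real:
kubectl auth reconcile -f rbac-setup.yaml
# Verify, mirroring the legacy-script.sh label-selector assertions above:
kubectl get clusterroles,clusterrolebindings -l test-cmd=auth -o name
kubectl get roles,rolebindings -n some-other-random -l test-cmd=auth -o name

The earlier "yes" checks in this case map onto kubectl auth can-i with impersonation, e.g. kubectl auth can-i create horizontalpodautoscalers.autoscaling -n ns --as bob --as-group the-group, which issues the SubjectAccessReview shown in the JSON above.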
+++ [0513 22:35:06] Testing kubectl(v1:multiget) get.sh:250: Successful get nodes/127.0.0.1 service/kubernetes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:kubernetes: (B+++ exit code: 0 Recording: run_resource_aliasing_tests Running command: run_resource_aliasing_tests +++ Running case: test-cmd.run_resource_aliasing_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_resource_aliasing_tests +++ [0513 22:35:06] Creating namespace namespace-1652481306-20408 namespace/namespace-1652481306-20408 created Context "test" modified. +++ [0513 22:35:06] Testing resource aliasing replicationcontroller/cassandra created I0513 22:35:07.145562 56663 event.go:294] "Event occurred" object="namespace-1652481306-20408/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-jzp7n" I0513 22:35:07.152408 56663 event.go:294] "Event occurred" object="namespace-1652481306-20408/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-kdhks" service/cassandra created discovery.sh:91: Successful get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}: cassandra:cassandra:cassandra:cassandra: (Bpod "cassandra-jzp7n" deleted I0513 22:35:07.482111 56663 event.go:294] "Event occurred" object="namespace-1652481306-20408/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-lxw48" I0513 22:35:07.502697 56663 event.go:294] "Event occurred" object="namespace-1652481306-20408/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-lpt2v" pod "cassandra-kdhks" deleted replicationcontroller "cassandra" deleted service "cassandra" deleted +++ exit code: 0 Recording: run_kubectl_explain_tests Running command: run_kubectl_explain_tests +++ Running case: test-cmd.run_kubectl_explain_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_explain_tests +++ [0513 22:35:07] Testing kubectl(v1:explain) KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. FIELDS: apiVersion APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status Most recently observed status of the pod. This data may not be up to date. Populated by the system. Read-only. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. FIELDS: apiVersion APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status Most recently observed status of the pod. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status KIND: Pod VERSION: v1 FIELD: message DESCRIPTION: A human readable message indicating details about why the pod is in this condition. KIND: CronJob VERSION: batch/v1 DESCRIPTION: CronJob represents the configuration of a single cron job. FIELDS: apiVersion APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec Specification of the desired behavior of a cron job, including the schedule. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status Current status of a cron job. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status +++ exit code: 0 Recording: run_crd_deletion_recreation_tests Running command: run_crd_deletion_recreation_tests +++ Running case: test-cmd.run_crd_deletion_recreation_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_crd_deletion_recreation_tests +++ [0513 22:35:08] Creating namespace namespace-1652481308-20013 namespace/namespace-1652481308-20013 created Context "test" modified. 
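The kubectl explain output above is rendered from the API server's published OpenAPI schema, one block per invocation. A minimal sketch of the calls that produce those sections (any reachable test cluster will do):

kubectl explain pods                 # full Pod schema: the KIND/VERSION/DESCRIPTION/FIELDS block
kubectl explain pods.status.message  # a single field: the "FIELD: message" block
kubectl explain cronjobs             # the batch/v1 CronJob block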
+++ [0513 22:35:08] Testing resource creation, deletion, and re-creation
Successful
message:customresourcedefinition.apiextensions.k8s.io/examples.test.com created
has:created
W0513 22:35:08.512599 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:35:08.512639 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0513 22:35:10.423924 56663 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0513 22:35:10.424009 56663 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0513 22:35:10.524347 56663 shared_informer.go:262] Caches are synced for garbage collector
Successful
message:example.test.com/test created
has:created
Successful
message:customresourcedefinition.apiextensions.k8s.io "examples.test.com" deleted
has:deleted
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
events                            ev           v1                                     true         Event
limitranges                       limits       v1                                     true         LimitRange
namespaces                        ns           v1                                     false        Namespace
nodes                             no           v1                                     false        Node
persistentvolumeclaims            pvc          v1                                     true         PersistentVolumeClaim
persistentvolumes                 pv           v1                                     false        PersistentVolume
pods                              po           v1                                     true         Pod
podtemplates                                   v1                                     true         PodTemplate
replicationcontrollers            rc           v1                                     true         ReplicationController
resourcequotas                    quota        v1                                     true         ResourceQuota
secrets                                        v1                                     true         Secret
serviceaccounts                   sa           v1                                     true         ServiceAccount
services                          svc          v1                                     true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io/v1        false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io/v1        false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io/v1                false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io/v1              false        APIService
controllerrevisions                            apps/v1                                true         ControllerRevision
daemonsets                        ds           apps/v1                                true         DaemonSet
deployments                       deploy       apps/v1                                true         Deployment
replicasets                       rs           apps/v1                                true         ReplicaSet
statefulsets                      sts          apps/v1                                true         StatefulSet
tokenreviews                                   authentication.k8s.io/v1               false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io/v1                true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io/v1                false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io/v1                false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io/v1                false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling/v2                         true         HorizontalPodAutoscaler
cronjobs                          cj           batch/v1                               true         CronJob
jobs                                           batch/v1                               true         Job
certificatesigningrequests        csr          certificates.k8s.io/v1                 false        CertificateSigningRequest
leases                                         coordination.k8s.io/v1                 true         Lease
endpointslices                                 discovery.k8s.io/v1                    true         EndpointSlice
events                            ev           events.k8s.io/v1                       true         Event
flowschemas                                    flowcontrol.apiserver.k8s.io/v1beta2   false        FlowSchema
prioritylevelconfigurations                    flowcontrol.apiserver.k8s.io/v1beta2   false        PriorityLevelConfiguration
ingressclasses                                 networking.k8s.io/v1                   false        IngressClass
ingresses                         ing          networking.k8s.io/v1                   true         Ingress
networkpolicies                   netpol       networking.k8s.io/v1                   true         NetworkPolicy
runtimeclasses                                 node.k8s.io/v1                         false        RuntimeClass
poddisruptionbudgets              pdb          policy/v1                              true         PodDisruptionBudget
clusterrolebindings                            rbac.authorization.k8s.io/v1           false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io/v1           false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io/v1           true         RoleBinding
roles                                          rbac.authorization.k8s.io/v1           true         Role
priorityclasses                   pc           scheduling.k8s.io/v1                   false        PriorityClass
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
csistoragecapacities                           storage.k8s.io/v1                      true         CSIStorageCapacity
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment
Successful
message:customresourcedefinition.apiextensions.k8s.io/examples.test.com created
has:created
W0513 22:35:12.173044 53075 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E0513 22:35:12.174332 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0513 22:35:14.747829 53075 controller.go:611] quota admission added evaluator for: examples.test.com
Successful
message:example.test.com/test created
has:created
example.test.com "test" deleted
customresourcedefinition.apiextensions.k8s.io "examples.test.com" deleted
+++ exit code: 0
Recording: run_swagger_tests
Running command: run_swagger_tests
+++ Running case: test-cmd.run_swagger_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_swagger_tests
+++ [0513 22:35:14] Testing swagger
+++ exit code: 0
Recording: run_kubectl_sort_by_tests
Running command: run_kubectl_sort_by_tests
+++ Running case: test-cmd.run_kubectl_sort_by_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_sort_by_tests
+++ [0513 22:35:15] Testing kubectl --sort-by
get.sh:264: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
No resources found in namespace-1652481308-20013 namespace.
No resources found in namespace-1652481308-20013 namespace.
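The --sort-by cases that follow sort the list client-side on a JSONPath expression evaluated against each returned object. A minimal sketch of the invocation shapes being exercised (the real run uses pods sorted-pod1 through sorted-pod3):

kubectl get pods --sort-by=.metadata.name                # ascending by object name
kubectl get pods --sort-by=.metadata.creationTimestamp   # oldest first
kubectl get pods --sort-by='{.metadata.labels.name}'     # full JSONPath form is also accepted

Sorting by a label explains the reordered output below: sorted-pod1 carries the label name=sorted-pod3-label, so label order differs from name order.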
get.sh:272: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (BW0513 22:35:15.889293 53075 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured E0513 22:35:15.890590 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource pod/valid-pod created get.sh:276: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (BSuccessful (Bmessage:NAME READY STATUS RESTARTS AGE valid-pod 0/1 Pending 0 1s has:valid-pod Successful (Bmessage:I0513 22:35:16.060705 88882 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:35:16.063823 88882 round_trippers.go:463] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481308-20013/pods?includeObject=Object I0513 22:35:16.063848 88882 round_trippers.go:469] Request Headers: I0513 22:35:16.063858 88882 round_trippers.go:473] Authorization: Bearer I0513 22:35:16.063866 88882 round_trippers.go:473] User-Agent: kubectl/v1.25.0 (linux/amd64) kubernetes/3441850 I0513 22:35:16.063873 88882 round_trippers.go:473] Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json I0513 22:35:16.069631 88882 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds I0513 22:35:16.069653 88882 round_trippers.go:577] Response Headers: I0513 22:35:16.069664 88882 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 48337b37-9a51-4e42-98ff-f2580fc0d15b I0513 22:35:16.069673 88882 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: aa0f52b4-76be-4e77-b1c8-8c3aeb17c29f I0513 22:35:16.069687 88882 round_trippers.go:580] Date: Fri, 13 May 2022 22:35:16 GMT I0513 22:35:16.069705 88882 round_trippers.go:580] Audit-Id: d0b5d680-002e-4bc3-9f19-79673d8fbecd I0513 22:35:16.069716 88882 round_trippers.go:580] Cache-Control: no-cache, private I0513 22:35:16.069726 88882 round_trippers.go:580] Content-Type: application/json I0513 22:35:16.069804 88882 request.go:1073] Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{"resourceVersion":"3587"},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. 
More info: http://kubernetes.io/docs/user-guide/identifiers#names","priority":0},{"name":"Ready","type":"string","format":"","description":"The aggregate readiness state of this pod for accepting traffic.","priority":0},{"name":"Status","type":"string","format":"","description":"The aggregate status of the containers in this pod.","priority":0},{"name":"Restarts","type":"string","format":"","description":"The number of times the containers in this pod have been restarted and when the last container in this pod has restarted.","priority":0},{"name":"Age","type":"stri [truncated 3519 chars] NAME READY STATUS RESTARTS AGE valid-pod 0/1 Pending 0 1s has:as=Table Successful (Bmessage:I0513 22:35:16.060705 88882 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:35:16.063823 88882 round_trippers.go:463] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481308-20013/pods?includeObject=Object I0513 22:35:16.063848 88882 round_trippers.go:469] Request Headers: I0513 22:35:16.063858 88882 round_trippers.go:473] Authorization: Bearer I0513 22:35:16.063866 88882 round_trippers.go:473] User-Agent: kubectl/v1.25.0 (linux/amd64) kubernetes/3441850 I0513 22:35:16.063873 88882 round_trippers.go:473] Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json I0513 22:35:16.069631 88882 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds I0513 22:35:16.069653 88882 round_trippers.go:577] Response Headers: I0513 22:35:16.069664 88882 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 48337b37-9a51-4e42-98ff-f2580fc0d15b I0513 22:35:16.069673 88882 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: aa0f52b4-76be-4e77-b1c8-8c3aeb17c29f I0513 22:35:16.069687 88882 round_trippers.go:580] Date: Fri, 13 May 2022 22:35:16 GMT I0513 22:35:16.069705 88882 round_trippers.go:580] Audit-Id: d0b5d680-002e-4bc3-9f19-79673d8fbecd I0513 22:35:16.069716 88882 round_trippers.go:580] Cache-Control: no-cache, private I0513 22:35:16.069726 88882 round_trippers.go:580] Content-Type: application/json I0513 22:35:16.069804 88882 request.go:1073] Response Body: {"kind":"Table","apiVersion":"meta.k8s.io/v1","metadata":{"resourceVersion":"3587"},"columnDefinitions":[{"name":"Name","type":"string","format":"name","description":"Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names","priority":0},{"name":"Ready","type":"string","format":"","description":"The aggregate readiness state of this pod for accepting traffic.","priority":0},{"name":"Status","type":"string","format":"","description":"The aggregate status of the containers in this pod.","priority":0},{"name":"Restarts","type":"string","format":"","description":"The number of times the containers in this pod have been restarted and when the last container in this pod has restarted.","priority":0},{"name":"Age","type":"stri [truncated 3519 chars] NAME READY STATUS RESTARTS AGE valid-pod 0/1 Pending 0 1s has:includeObject=Object get.sh:287: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (Bwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely. pod "valid-pod" force deleted get.sh:291: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bget.sh:296: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/sorted-pod1 created get.sh:300: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: sorted-pod1: (Bpod/sorted-pod2 created get.sh:304: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: sorted-pod1:sorted-pod2: (Bpod/sorted-pod3 created get.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: sorted-pod1:sorted-pod2:sorted-pod3: (BSuccessful (Bmessage:sorted-pod1:sorted-pod2:sorted-pod3: has:sorted-pod1:sorted-pod2:sorted-pod3: Successful (Bmessage:sorted-pod3:sorted-pod2:sorted-pod1: has:sorted-pod3:sorted-pod2:sorted-pod1: Successful (Bmessage:sorted-pod2:sorted-pod1:sorted-pod3: has:sorted-pod2:sorted-pod1:sorted-pod3: Successful (Bmessage:sorted-pod1:sorted-pod2:sorted-pod3: has:sorted-pod1:sorted-pod2:sorted-pod3: Successful (Bmessage:sorted-pod3:sorted-pod1:sorted-pod2: has:sorted-pod3:sorted-pod1:sorted-pod2: Successful (Bmessage:sorted-pod3:sorted-pod1:sorted-pod2: has:sorted-pod3:sorted-pod1:sorted-pod2: Successful (Bmessage:sorted-pod3:sorted-pod1:sorted-pod2: has:sorted-pod3:sorted-pod1:sorted-pod2: Successful (Bmessage:sorted-pod3:sorted-pod1:sorted-pod2: has:sorted-pod3:sorted-pod1:sorted-pod2: Successful (Bmessage:I0513:I0513:I0513:I0513:I0513:I0513:I0513:I0513:I0513:I0513:I0513:I0513:I0513:I0513:NAME:sorted-pod2:sorted-pod1:sorted-pod3: has:sorted-pod2:sorted-pod1:sorted-pod3: Successful (Bmessage:I0513 22:35:17.966851 89163 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:35:17.970984 89163 round_trippers.go:463] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1652481308-20013/pods I0513 22:35:17.971003 89163 round_trippers.go:469] Request Headers: I0513 22:35:17.971011 89163 round_trippers.go:473] Accept: application/json I0513 22:35:17.971017 89163 round_trippers.go:473] User-Agent: kubectl/v1.25.0 (linux/amd64) kubernetes/3441850 I0513 22:35:17.971024 89163 round_trippers.go:473] Authorization: Bearer I0513 22:35:17.977905 89163 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds I0513 22:35:17.977922 89163 round_trippers.go:577] Response Headers: I0513 22:35:17.977929 89163 round_trippers.go:580] Content-Type: application/json I0513 22:35:17.977935 89163 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 48337b37-9a51-4e42-98ff-f2580fc0d15b I0513 22:35:17.977941 89163 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: aa0f52b4-76be-4e77-b1c8-8c3aeb17c29f I0513 22:35:17.977950 89163 round_trippers.go:580] Date: Fri, 13 May 2022 22:35:17 GMT I0513 22:35:17.977956 89163 round_trippers.go:580] Audit-Id: 0d39c4c7-dcd0-45ce-ae70-cf0ebf2146a9 I0513 22:35:17.977961 89163 round_trippers.go:580] Cache-Control: no-cache, private I0513 22:35:17.978077 89163 request.go:1073] Response Body: 
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"3593"},"items":[{"metadata":{"name":"sorted-pod1","namespace":"namespace-1652481308-20013","uid":"26487ddb-5671-4d70-8ea4-8aeb680e30e6","resourceVersion":"3591","creationTimestamp":"2022-05-13T22:35:16Z","labels":{"name":"sorted-pod3-label"},"managedFields":[{"manager":"kubectl-create","operation":"Update","apiVersion":"v1","time":"2022-05-13T22:35:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-pause2\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"containers":[{"name":"kubernetes-pause2","image":"k8s.gcr.io/ [truncated 3209 chars] NAME AGE sorted-pod2 0s sorted-pod1 1s sorted-pod3 0s has not:Table get.sh:349: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: sorted-pod1:sorted-pod2:sorted-pod3: (Bwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "sorted-pod1" force deleted pod "sorted-pod2" force deleted pod "sorted-pod3" force deleted get.sh:353: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (B+++ exit code: 0 Recording: run_kubectl_all_namespace_tests Running command: run_kubectl_all_namespace_tests +++ Running case: test-cmd.run_kubectl_all_namespace_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_kubectl_all_namespace_tests +++ [0513 22:35:18] Testing kubectl --all-namespace get.sh:366: Successful get namespaces {{range.items}}{{if eq .metadata.name \"default\"}}{{.metadata.name}}:{{end}}{{end}}: default: (Bget.sh:370: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bpod/valid-pod created W0513 22:35:18.657209 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0513 22:35:18.657249 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource get.sh:374: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (BNAMESPACE NAME READY STATUS RESTARTS AGE namespace-1652481308-20013 valid-pod 0/1 Pending 0 0s namespace/all-ns-test-1 created serviceaccount/test created namespace/all-ns-test-2 created serviceaccount/test created Successful (Bmessage:NAMESPACE NAME SECRETS AGE all-ns-test-1 default 0 1s all-ns-test-1 test 0 1s all-ns-test-2 default 0 1s all-ns-test-2 test 0 0s default default 0 5m40s kube-node-lease default 0 77s kube-public default 0 5m40s kube-system default 0 5m40s namespace-1652481203-31996 default 0 116s namespace-1652481211-25049 default 0 107s namespace-1652481218-29646 default 0 101s namespace-1652481219-19966 default 0 100s namespace-1652481225-14148 default 0 94s namespace-1652481232-21406 default 0 87s namespace-1652481233-14788 default 0 86s namespace-1652481241-859 default 0 78s namespace-1652481242-1321 default 0 77s namespace-1652481245-15758 default 0 74s 
namespace-1652481256-20403 default 0 63s namespace-1652481271-16016 default 0 48s namespace-1652481279-1161 default 0 40s namespace-1652481280-24781 default 0 39s namespace-1652481283-1280 default 0 36s namespace-1652481283-20331 default 0 36s namespace-1652481294-2989 default 0 25s namespace-1652481296-2495 default 0 23s namespace-1652481306-20408 default 0 13s namespace-1652481308-20013 default 0 11s some-other-random default 0 13s has:all-ns-test-1 Successful (Bmessage:NAMESPACE NAME SECRETS AGE all-ns-test-1 default 0 1s all-ns-test-1 test 0 1s all-ns-test-2 default 0 1s all-ns-test-2 test 0 0s default default 0 5m40s kube-node-lease default 0 77s kube-public default 0 5m40s kube-system default 0 5m40s namespace-1652481203-31996 default 0 116s namespace-1652481211-25049 default 0 107s namespace-1652481218-29646 default 0 101s namespace-1652481219-19966 default 0 100s namespace-1652481225-14148 default 0 94s namespace-1652481232-21406 default 0 87s namespace-1652481233-14788 default 0 86s namespace-1652481241-859 default 0 78s namespace-1652481242-1321 default 0 77s namespace-1652481245-15758 default 0 74s namespace-1652481256-20403 default 0 63s namespace-1652481271-16016 default 0 48s namespace-1652481279-1161 default 0 40s namespace-1652481280-24781 default 0 39s namespace-1652481283-1280 default 0 36s namespace-1652481283-20331 default 0 36s namespace-1652481294-2989 default 0 25s namespace-1652481296-2495 default 0 23s namespace-1652481306-20408 default 0 13s namespace-1652481308-20013 default 0 11s some-other-random default 0 13s has:all-ns-test-2 Successful (Bmessage:NAMESPACE NAME SECRETS AGE all-ns-test-1 default 0 1s all-ns-test-1 test 0 1s all-ns-test-2 default 0 1s all-ns-test-2 test 0 0s default default 0 5m40s kube-node-lease default 0 77s kube-public default 0 5m40s kube-system default 0 5m40s namespace-1652481203-31996 default 0 116s namespace-1652481211-25049 default 0 107s namespace-1652481218-29646 default 0 101s namespace-1652481219-19966 default 0 100s namespace-1652481225-14148 default 0 94s namespace-1652481232-21406 default 0 87s namespace-1652481233-14788 default 0 86s namespace-1652481241-859 default 0 78s namespace-1652481242-1321 default 0 77s namespace-1652481245-15758 default 0 74s namespace-1652481256-20403 default 0 63s namespace-1652481271-16016 default 0 48s namespace-1652481279-1161 default 0 40s namespace-1652481280-24781 default 0 39s namespace-1652481283-1280 default 0 36s namespace-1652481283-20331 default 0 36s namespace-1652481294-2989 default 0 25s namespace-1652481296-2495 default 0 23s namespace-1652481306-20408 default 0 13s namespace-1652481308-20013 default 0 11s some-other-random default 0 13s has:all-ns-test-1 Successful (Bmessage:NAMESPACE NAME SECRETS AGE all-ns-test-1 default 0 1s all-ns-test-1 test 0 1s all-ns-test-2 default 0 1s all-ns-test-2 test 0 0s default default 0 5m40s kube-node-lease default 0 77s kube-public default 0 5m40s kube-system default 0 5m40s namespace-1652481203-31996 default 0 116s namespace-1652481211-25049 default 0 107s namespace-1652481218-29646 default 0 101s namespace-1652481219-19966 default 0 100s namespace-1652481225-14148 default 0 94s namespace-1652481232-21406 default 0 87s namespace-1652481233-14788 default 0 86s namespace-1652481241-859 default 0 78s namespace-1652481242-1321 default 0 77s namespace-1652481245-15758 default 0 74s namespace-1652481256-20403 default 0 63s namespace-1652481271-16016 default 0 48s namespace-1652481279-1161 default 0 40s namespace-1652481280-24781 default 0 39s 
namespace-1652481283-1280 default 0 36s namespace-1652481283-20331 default 0 36s namespace-1652481294-2989 default 0 25s namespace-1652481296-2495 default 0 23s namespace-1652481306-20408 default 0 13s namespace-1652481308-20013 default 0 11s some-other-random default 0 13s has:all-ns-test-2 namespace "all-ns-test-1" deleted namespace "all-ns-test-2" deleted W0513 22:35:24.533869 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0513 22:35:24.533898 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource W0513 22:35:24.587849 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource E0513 22:35:24.587876 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource I0513 22:35:29.348681 56663 namespace_controller.go:185] Namespace has been deleted all-ns-test-1 get.sh:400: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod: (Bwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "valid-pod" force deleted get.sh:404: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: (Bget.sh:408: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1: (BSuccessful (Bmessage:NAME STATUS ROLES AGE VERSION 127.0.0.1 NotReady 5m49s has not:NAMESPACE +++ exit code: 0 Recording: run_deprecated_api_tests Running command: run_deprecated_api_tests +++ Running case: test-cmd.run_deprecated_api_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_deprecated_api_tests +++ [0513 22:35:29] Testing deprecated APIs customresourcedefinition.apiextensions.k8s.io/deprecated.example.com created Successful (Bmessage:deprecated.example.com has:deprecated.example.com Successful (Bmessage:Warning: example.com/v1beta1 DeprecatedKind is deprecated; use example.com/v1 DeprecatedKind No resources found in namespace-1652481308-20013 namespace. has:example.com/v1beta1 DeprecatedKind is deprecated Successful (Bmessage:Warning: example.com/v1beta1 DeprecatedKind is deprecated; use example.com/v1 DeprecatedKind No resources found in namespace-1652481308-20013 namespace. error: 1 warning received has:example.com/v1beta1 DeprecatedKind is deprecated Successful (Bmessage:Warning: example.com/v1beta1 DeprecatedKind is deprecated; use example.com/v1 DeprecatedKind No resources found in namespace-1652481308-20013 namespace. 
Recording: run_template_output_tests
Running command: run_template_output_tests
+++ Running case: test-cmd.run_template_output_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_template_output_tests
+++ [0513 22:35:30] Testing --template support on commands
+++ [0513 22:35:30] Creating namespace namespace-1652481330-13942
namespace/namespace-1652481330-13942 created
Context "test" modified.
template-output.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}:
pod/valid-pod created
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {
                "creationTimestamp": "2022-05-13T22:35:31Z",
                "labels": {
                    "name": "valid-pod"
                },
                "name": "valid-pod",
                "namespace": "namespace-1652481330-13942",
                "resourceVersion": "3644",
                "uid": "1d521f0b-5633-47ac-9525-611176345c4a"
            },
            "spec": {
                "containers": [
                    {
                        "image": "k8s.gcr.io/serve_hostname",
                        "imagePullPolicy": "Always",
                        "name": "kubernetes-serve-hostname",
                        "resources": {
                            "limits": {
                                "cpu": "1",
                                "memory": "512Mi"
                            },
                            "requests": {
                                "cpu": "1",
                                "memory": "512Mi"
                            }
                        },
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File"
                    }
                ],
                "dnsPolicy": "ClusterFirst",
                "enableServiceLinks": true,
                "preemptionPolicy": "PreemptLowerPriority",
                "priority": 0,
                "restartPolicy": "Always",
                "schedulerName": "default-scheduler",
                "securityContext": {},
                "terminationGracePeriodSeconds": 30
            },
            "status": {
                "phase": "Pending",
                "qosClass": "Guaranteed"
            }
        }
    ],
    "kind": "List",
    "metadata": {
        "resourceVersion": ""
    }
}
template-output.sh:35: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:valid-pod:
has:valid-pod:
Successful
message:valid-pod:
has:valid-pod:
Successful
message:valid-pod:
has:valid-pod:
Successful
message:valid-pod:
has:valid-pod:
Successful
message:valid-pod:
has:valid-pod:
Successful
message:scale-1:
has:scale-1:
Successful
message:redis-slave:
has:redis-slave:
Successful
message:pi:
has:pi:
Successful
message:127.0.0.1:
has:127.0.0.1:
node/127.0.0.1 untainted
replicationcontroller/cassandra created
I0513 22:35:33.311604 56663 event.go:294] "Event occurred" object="namespace-1652481330-13942/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-chj24"
I0513 22:35:33.318786 56663 event.go:294] "Event occurred" object="namespace-1652481330-13942/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-qv6sm"
Successful
message:cassandra:
has:cassandra:
reconciliation required create
missing rules added: {Verbs:[create delete deletecollection get list patch update watch] APIGroups:[] Resources:[pods] ResourceNames:[] NonResourceURLs:[]}
reconciliation required create
missing subjects added: {Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:}
reconciliation required create
missing subjects added: {Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:}
reconciliation required create
missing rules added: {Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
Successful
message:testing-CR:testing-CRB:testing-RB:testing-R:
has:testing-CR:testing-CRB:testing-RB:testing-R:
Successful
message:myclusterrole:
has:myclusterrole:
Successful
message:foo:
has:foo:
Successful
message:cm:
has:cm:
W0513 22:35:33.720116 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:35:33.720147 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:deploy:
has:deploy:
I0513 22:35:33.742045 56663 event.go:294] "Event occurred" object="namespace-1652481330-13942/deploy" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set deploy-748954c8bb to 1"
I0513 22:35:33.748885 56663 event.go:294] "Event occurred" object="namespace-1652481330-13942/deploy-748954c8bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deploy-748954c8bb-9rzww"
W0513 22:35:33.894030 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:35:33.894412 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
cronjob.batch/pi created
Successful
message:foo:
has:foo:
Successful
message:bar:
has:bar:
Successful
message:foo:
has:foo:
Successful
message:myrole:
has:myrole:
Successful
message:foo:
has:foo:
Successful
message:foo:
has:foo:
Successful
message:foo:
has:foo:
Successful
message:foo:
has:foo:
Successful
message:valid-pod:
has:valid-pod:
I0513 22:35:34.545376 56663 namespace_controller.go:185] Namespace has been deleted all-ns-test-2
Successful
message:valid-pod:
has:valid-pod:
Successful
message:valid-pod:
has:valid-pod:
Successful
message:kubernetes:
has:kubernetes:
Successful
message:valid-pod:
has:valid-pod:
Successful
message:foo:
has:foo:
Successful
message:foo:
has:foo:
Successful
message:foo:
has:foo:
Successful
message:foo:
has:foo:
Successful
message:foo:
has:foo:
Successful
message:foo:
has:foo:
Successful
message:foo:
has:foo:
Successful
message:foo:
has:foo:
Successful
message:apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://127.0.0.1:6443
  name: local
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://does-not-work
  name: test-cluster
- cluster:
    certificate-authority: /tmp/apiserver.crt
    server: ""
  name: test-cluster-1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: ""
  name: test-cluster-2
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: ""
  name: test-cluster-3
contexts:
- context:
    cluster: local
    namespace: namespace-1652481330-13942
    user: test-admin
  name: test
current-context: test
kind: Config
preferences: {}
users:
- name: test-admin
  user:
    token: REDACTED
- name: testuser
  user:
    client-certificate: /tmp/testuser.crt
    client-key: /home/prow/go/src/k8s.io/kubernetes/hack/testdata/auth/testuser.key
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args: null
      command: /tmp/invalid_execcredential.sh
      env: null
      interactiveMode: IfAvailable
      provideClusterInfo: false
- name: user1
  user:
    client-certificate: /tmp/test-client-certificate.crt
    client-key: /tmp/test-client-key.crt
- name: user2
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: user3
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
has:kind: Config
Successful
message:deploy:
has:deploy:
Successful
message:deploy:
has:deploy:
Successful
message:deploy:
has:deploy:
Successful
message:deploy:
has:deploy:
Successful
message:Config:
has:Config
Successful
message:apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: cm
has:kind: ConfigMap
cronjob.batch "pi" deleted
I0513 22:35:35.876939 56663 event.go:294] "Event occurred" object="namespace-1652481330-13942/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-254x8"
pod "cassandra-chj24" deleted
pod "cassandra-qv6sm" deleted
pod "deploy-748954c8bb-9rzww" deleted
I0513 22:35:35.924615 56663 event.go:294] "Event occurred" object="namespace-1652481330-13942/deploy-748954c8bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deploy-748954c8bb-vmg4r"
I0513 22:35:35.924683 56663 event.go:294] "Event occurred" object="namespace-1652481330-13942/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-tpfm4"
pod "valid-pod" deleted
replicationcontroller "cassandra" deleted
clusterrole.rbac.authorization.k8s.io "myclusterrole" deleted
clusterrolebinding.rbac.authorization.k8s.io "foo" deleted
deployment.apps "deploy" deleted
+++ exit code: 0
Recording: run_certificates_tests
Running command: run_certificates_tests
+++ Running case: test-cmd.run_certificates_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_certificates_tests
+++ [0513 22:35:36] Testing certificates
certificatesigningrequest.certificates.k8s.io/foo created
W0513 22:35:36.501636 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:35:36.501669 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
certificate.sh:29: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}:
certificatesigningrequest.certificates.k8s.io/foo approved
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "certificates.k8s.io/v1",
            "kind": "CertificateSigningRequest",
            "metadata": {
                "creationTimestamp": "2022-05-13T22:35:36Z",
                "name": "foo",
                "resourceVersion": "3712",
                "uid": "02c466b9-0fda-47eb-be01-87de9545e3c8"
            },
            "spec": {
                "groups": [
                    "system:masters",
                    "system:authenticated"
                ],
                "request":
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2d6Q0NBV3NDQVFBd0ZURVRNQkVHQTFVRUF4TUthM1ZpWlMxaFpHMXBiakNDQVNJd0RRWUpLb1pJaHZjTgpBUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTlJ5dFhkcWV6ZTFBdXFjZkpWYlFBY1BJejZWY2pXSTZ5WmlQa3lrCjAzUW9GaHJGRXhUQXNPTGVFUHlrQXc1YndUOWZiajRXMzZmR2k4RGxsd1FzVGoyYzVUTnBnQkkwbElDbzI4aGcKbHYvTDJsMnRsWUVKdDdTbVhjblNvaGJ5S0h4TERRUHVmTVBBTkZsaEFmTUdCWEhRcmZMajhrTk1MUDA4UlBsbAp0N3V4RDVRdFA0cHlGL1Nhbm1XVEtRNU56WlJ4TC82UmhJMEpxSHJmNFFjQmg2dlR5bnFaRGVmMWVxNjBnQXllClNPRkpKYWRuK3h2VEFqLzgxZk1TbjdOSlNnaktDYkNEeXQ1eS9UZHd0SzZnVUQzM01paE5uNXhKTVF0MUZXUVAKRzY3eTA1QVh6b0pqTm5sWVA1MnJsTlhvNzh6aVMrN1E4RklxQzY0c05vWWhxeGNDQXdFQUFhQXBNQ2NHQ1NxRwpTSWIzRFFFSkRqRWFNQmd3Q1FZRFZSMFRCQUl3QURBTEJnTlZIUThFQkFNQ0JlQXdEUVlKS29aSWh2Y05BUUVMCkJRQURnZ0VCQU5CazlwaHpWYUJBci9xZHN4bXdPR1NQa094UkZlR1lyemRvaW5LTzVGUGZER2JkU0VWQ0o1K0wKeWJTNUtmaUZYU1EvNmk0RE9WRWtxcnFrVElIc1JNSlJwbTZ5Zjk1TU4zSWVLak9jQlV2b2VWVlpxMUNOUU8zagp2dklmK1A1NStLdXpvK0NIT1F5RWlvTlRPaUtGWTJseStEZEEwMXMxbU9FMTZSWGlWeFhGcFhGeGRJVmRPK0oxClZ1MW5yWG5ZVFJQRmtyaG80MTlpaDQzNjRPcGZqYXFXVCtmd20ySVZQSlBoaUJpYi9RRzRhUGJJcFh3amlCUUMKemV6WlM2L01nQkt1bUdMZ3Z5MitXNU9UWTJ5ZFFMZFVxbERFNEU2MFhmdVZ6bk5zWjZDS0tYY1pVaW1ZTkkwNgpKa0t4bGRjd0V2cmI0SmN3M2RFQjdOOUwvSW9ZNXFBPQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K", "signerName": "kubernetes.io/kube-apiserver-client", "usages": [ "digital signature", "key encipherment", "client auth" ], "username": "admin" }, "status": { "certificate": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2VENDQWRHZ0F3SUJBZ0lSQUxuVmMzYlZjYUNURkZmNnlFaW1vSXd3RFFZSktvWklodmNOQVFFTEJRQXcKRkRFU01CQUdBMVVFQXd3Sk1USTNMakF1TUM0eE1CNFhEVEl5TURVeE16SXlNekF6TmxvWERUSXpNRFV4TXpJeQpNekF6Tmxvd0ZURVRNQkVHQTFVRUF4TUthM1ZpWlMxaFpHMXBiakNDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFECmdnRVBBRENDQVFvQ2dnRUJBTlJ5dFhkcWV6ZTFBdXFjZkpWYlFBY1BJejZWY2pXSTZ5WmlQa3lrMDNRb0ZockYKRXhUQXNPTGVFUHlrQXc1YndUOWZiajRXMzZmR2k4RGxsd1FzVGoyYzVUTnBnQkkwbElDbzI4aGdsdi9MMmwydApsWUVKdDdTbVhjblNvaGJ5S0h4TERRUHVmTVBBTkZsaEFmTUdCWEhRcmZMajhrTk1MUDA4UlBsbHQ3dXhENVF0ClA0cHlGL1Nhbm1XVEtRNU56WlJ4TC82UmhJMEpxSHJmNFFjQmg2dlR5bnFaRGVmMWVxNjBnQXllU09GSkphZG4KK3h2VEFqLzgxZk1TbjdOSlNnaktDYkNEeXQ1eS9UZHd0SzZnVUQzM01paE5uNXhKTVF0MUZXUVBHNjd5MDVBWAp6b0pqTm5sWVA1MnJsTlhvNzh6aVMrN1E4RklxQzY0c05vWWhxeGNDQXdFQUFhTTFNRE13RGdZRFZSMFBBUUgvCkJBUURBZ1dnTUJNR0ExVWRKUVFNTUFvR0NDc0dBUVVGQndNQ01Bd0dBMVVkRXdFQi93UUNNQUF3RFFZSktvWkkKaHZjTkFRRUxCUUFEZ2dFQkFBY3VmMUY3aDdUeUVSVjBsNkRuL1dHUkJ1VHEycndzSFh0RkhJWlA3M052Z0E0Lwo4TTJLYmZ4UEhRYVIyeEFSNkdldE10YkhEMy9wN2pqaVNoQ3R4YzE5dSs3MVF2dFBIaTNIVEpiWFRGTjYxMkRSClU5a0Z0QVRCWDlWdUxiOEgwSjM0Uy90dUx4K1Z1UzNta2JBUEs3OEJqYnkyNVA5RWNiS2RyZFpBTDkxSTNUc04KNXppRWlWeDdRY2ZlQTFjckM2MzJuazJQNkZycUdlSDZkN3JDQS9GYzZFRkxIdTlXUmRQWDNpMldFY0JUTzBzZQpNWWE1cWpnNG9wbnpYeGNLYkVZNzRxc1U4bFNJdjZvRVhmMVF4TnFiaW5XUnBDWFU0WGU1R3NKNDRKL2JYeVFxCkcybFQzSnljSkRkS0RmWWJtc0xXdzMrNEc4cVFPMFAvd1Q5U3BEbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "conditions": [ { "lastTransitionTime": "2022-05-13T22:35:36Z", "lastUpdateTime": "2022-05-13T22:35:36Z", "message": "This CSR was approved by kubectl certificate approve.", "reason": "KubectlApprove", "status": "True", "type": "Approved" } ] } } ], "kind": "List", "metadata": { "resourceVersion": "" } } certificate.sh:32: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: Approved (Bquery for certificatesigningrequests had limit param query for events had limit param query for certificatesigningrequests had user-specified limit param Successful describe certificatesigningrequests verbose logs: I0513 22:35:36.801361 90492 loader.go:372] Config loaded from 
file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:35:36.806065 90492 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:35:36.827525 90492 round_trippers.go:553] GET https://127.0.0.1:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?limit=500 200 OK in 1 milliseconds I0513 22:35:36.829357 90492 round_trippers.go:553] GET https://127.0.0.1:6443/apis/certificates.k8s.io/v1/certificatesigningrequests/foo 200 OK in 1 milliseconds I0513 22:35:36.845019 90492 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dfoo%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DCertificateSigningRequest%2CinvolvedObject.uid%3D02c466b9-0fda-47eb-be01-87de9545e3c8&limit=500 200 OK in 15 milliseconds (Bcertificatesigningrequest.certificates.k8s.io "foo" deleted certificate.sh:36: Successful get csr {{range.items}}{{.metadata.name}}{{end}}: (Bcertificatesigningrequest.certificates.k8s.io/foo created certificate.sh:39: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: (Bcertificatesigningrequest.certificates.k8s.io/foo approved { "apiVersion": "v1", "items": [ { "apiVersion": "certificates.k8s.io/v1", "kind": "CertificateSigningRequest", "metadata": { "creationTimestamp": "2022-05-13T22:35:37Z", "name": "foo", "resourceVersion": "3717", "uid": "edaf43d7-17cf-46de-986e-a6551831be77" }, "spec": { "groups": [ "system:masters", "system:authenticated" ], "request": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2d6Q0NBV3NDQVFBd0ZURVRNQkVHQTFVRUF4TUthM1ZpWlMxaFpHMXBiakNDQVNJd0RRWUpLb1pJaHZjTgpBUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTlJ5dFhkcWV6ZTFBdXFjZkpWYlFBY1BJejZWY2pXSTZ5WmlQa3lrCjAzUW9GaHJGRXhUQXNPTGVFUHlrQXc1YndUOWZiajRXMzZmR2k4RGxsd1FzVGoyYzVUTnBnQkkwbElDbzI4aGcKbHYvTDJsMnRsWUVKdDdTbVhjblNvaGJ5S0h4TERRUHVmTVBBTkZsaEFmTUdCWEhRcmZMajhrTk1MUDA4UlBsbAp0N3V4RDVRdFA0cHlGL1Nhbm1XVEtRNU56WlJ4TC82UmhJMEpxSHJmNFFjQmg2dlR5bnFaRGVmMWVxNjBnQXllClNPRkpKYWRuK3h2VEFqLzgxZk1TbjdOSlNnaktDYkNEeXQ1eS9UZHd0SzZnVUQzM01paE5uNXhKTVF0MUZXUVAKRzY3eTA1QVh6b0pqTm5sWVA1MnJsTlhvNzh6aVMrN1E4RklxQzY0c05vWWhxeGNDQXdFQUFhQXBNQ2NHQ1NxRwpTSWIzRFFFSkRqRWFNQmd3Q1FZRFZSMFRCQUl3QURBTEJnTlZIUThFQkFNQ0JlQXdEUVlKS29aSWh2Y05BUUVMCkJRQURnZ0VCQU5CazlwaHpWYUJBci9xZHN4bXdPR1NQa094UkZlR1lyemRvaW5LTzVGUGZER2JkU0VWQ0o1K0wKeWJTNUtmaUZYU1EvNmk0RE9WRWtxcnFrVElIc1JNSlJwbTZ5Zjk1TU4zSWVLak9jQlV2b2VWVlpxMUNOUU8zagp2dklmK1A1NStLdXpvK0NIT1F5RWlvTlRPaUtGWTJseStEZEEwMXMxbU9FMTZSWGlWeFhGcFhGeGRJVmRPK0oxClZ1MW5yWG5ZVFJQRmtyaG80MTlpaDQzNjRPcGZqYXFXVCtmd20ySVZQSlBoaUJpYi9RRzRhUGJJcFh3amlCUUMKemV6WlM2L01nQkt1bUdMZ3Z5MitXNU9UWTJ5ZFFMZFVxbERFNEU2MFhmdVZ6bk5zWjZDS0tYY1pVaW1ZTkkwNgpKa0t4bGRjd0V2cmI0SmN3M2RFQjdOOUwvSW9ZNXFBPQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K", "signerName": "kubernetes.io/kube-apiserver-client", "usages": [ "digital signature", "key encipherment", "client auth" ], "username": "admin" }, "status": { "certificate": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2RENDQWRDZ0F3SUJBZ0lRS1NnOWNBT29KT2lyUnBLL2xXa1hjREFOQmdrcWhraUc5dzBCQVFzRkFEQVUKTVJJd0VBWURWUVFEREFreE1qY3VNQzR3TGpFd0hoY05Nakl3TlRFek1qSXpNRE0zV2hjTk1qTXdOVEV6TWpJegpNRE0zV2pBVk1STXdFUVlEVlFRREV3cHJkV0psTFdGa2JXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DCkFROEFNSUlCQ2dLQ0FRRUExSEsxZDJwN043VUM2cHg4bFZ0QUJ3OGpQcFZ5TllqckptSStUS1RUZENnV0dzVVQKRk1DdzR0NFEvS1FERGx2QlAxOXVQaGJmcDhhTHdPV1hCQ3hPUFp6bE0ybUFFalNVZ0tqYnlHQ1cvOHZhWGEyVgpnUW0zdEtaZHlkS2lGdklvZkVzTkErNTh3OEEwV1dFQjh3WUZjZEN0OHVQeVEwd3MvVHhFK1dXM3U3RVBsQzAvCmluSVg5SnFlWlpNcERrM05sSEV2L3BHRWpRbW9ldC9oQndHSHE5UEtlcGtONS9WNnJyU0FESjVJNFVrbHAyZjcKRzlNQ1AvelY4eEtmczBsS0NNb0pzSVBLM25MOU4zQzBycUJRUGZjeUtFMmZuRWt4QzNVVlpBOGJydkxUa0JmTwpnbU0yZVZnL25hdVUxZWp2ek9KTDd0RHdVaW9Mcml3MmhpR3JGd0lEQVFBQm96VXdNekFPQmdOVkhROEJBZjhFCkJBTUNCYUF3RXdZRFZSMGxCQXd3Q2dZSUt3WUJCUVVIQXdJd0RBWURWUjBUQVFIL0JBSXdBREFOQmdrcWhraUcKOXcwQkFRc0ZBQU9DQVFFQU90cjc5clkxMldjeXlNT1ROVVFXZzMvaGNuNlhUeFdNRU5lSkRaaHk5UWZwa1NkcAptOUpwa3FkUTk4QlFPMU5ZUzRSckxlNmxDekwrZnJCS1FNaDBwV1ZrRld6UnREaFRLbGdmbWFRTWR1MU1tUzE4CmhINFZkTDJPQW9oSHY2TUxQc0hEWDdoN3pBU3d4bEI3ZFlScnR4Vy9TNDNIWnl5UlJqM01vMzR3RlpTcEgvdG4KN2hvbHBGS0ZGa2dGK0lTVjZQVHFsVWwzMXA0MFcyUitJOHp5Y0ZrbFlCc0g5Y3NrL1FuV0N2d0JHVHp2dDFUUAppajllS2FlOG82QmRNZWNaRnNCSldsYzJkVUxqaTBjQkY1cy9vMzBUVjljck1xditqckl0WnRCeFhjOUdIdUhtCmV0L21lK2ZnWE95ZGd4L2kydFFqeG9HSnpxblVwRVZZWE1zaTJRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=", "conditions": [ { "lastTransitionTime": "2022-05-13T22:35:37Z", "lastUpdateTime": "2022-05-13T22:35:37Z", "message": "This CSR was approved by kubectl certificate approve.", "reason": "KubectlApprove", "status": "True", "type": "Approved" } ] } } ], "kind": "List", "metadata": { "resourceVersion": "" } } certificate.sh:42: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: Approved (Bcertificatesigningrequest.certificates.k8s.io "foo" deleted certificate.sh:44: Successful get csr {{range.items}}{{.metadata.name}}{{end}}: (Bcertificatesigningrequest.certificates.k8s.io/foo created certificate.sh:48: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: (Bcertificatesigningrequest.certificates.k8s.io/foo denied { "apiVersion": "v1", "items": [ { "apiVersion": "certificates.k8s.io/v1", "kind": "CertificateSigningRequest", "metadata": { "creationTimestamp": "2022-05-13T22:35:37Z", "name": "foo", "resourceVersion": "3720", "uid": "a1557551-bf5c-4242-b42a-37780cee97fb" }, "spec": { "groups": [ "system:masters", "system:authenticated" ], "request": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2d6Q0NBV3NDQVFBd0ZURVRNQkVHQTFVRUF4TUthM1ZpWlMxaFpHMXBiakNDQVNJd0RRWUpLb1pJaHZjTgpBUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTlJ5dFhkcWV6ZTFBdXFjZkpWYlFBY1BJejZWY2pXSTZ5WmlQa3lrCjAzUW9GaHJGRXhUQXNPTGVFUHlrQXc1YndUOWZiajRXMzZmR2k4RGxsd1FzVGoyYzVUTnBnQkkwbElDbzI4aGcKbHYvTDJsMnRsWUVKdDdTbVhjblNvaGJ5S0h4TERRUHVmTVBBTkZsaEFmTUdCWEhRcmZMajhrTk1MUDA4UlBsbAp0N3V4RDVRdFA0cHlGL1Nhbm1XVEtRNU56WlJ4TC82UmhJMEpxSHJmNFFjQmg2dlR5bnFaRGVmMWVxNjBnQXllClNPRkpKYWRuK3h2VEFqLzgxZk1TbjdOSlNnaktDYkNEeXQ1eS9UZHd0SzZnVUQzM01paE5uNXhKTVF0MUZXUVAKRzY3eTA1QVh6b0pqTm5sWVA1MnJsTlhvNzh6aVMrN1E4RklxQzY0c05vWWhxeGNDQXdFQUFhQXBNQ2NHQ1NxRwpTSWIzRFFFSkRqRWFNQmd3Q1FZRFZSMFRCQUl3QURBTEJnTlZIUThFQkFNQ0JlQXdEUVlKS29aSWh2Y05BUUVMCkJRQURnZ0VCQU5CazlwaHpWYUJBci9xZHN4bXdPR1NQa094UkZlR1lyemRvaW5LTzVGUGZER2JkU0VWQ0o1K0wKeWJTNUtmaUZYU1EvNmk0RE9WRWtxcnFrVElIc1JNSlJwbTZ5Zjk1TU4zSWVLak9jQlV2b2VWVlpxMUNOUU8zagp2dklmK1A1NStLdXpvK0NIT1F5RWlvTlRPaUtGWTJseStEZEEwMXMxbU9FMTZSWGlWeFhGcFhGeGRJVmRPK0oxClZ1MW5yWG5ZVFJQRmtyaG80MTlpaDQzNjRPcGZqYXFXVCtmd20ySVZQSlBoaUJpYi9RRzRhUGJJcFh3amlCUUMKemV6WlM2L01nQkt1bUdMZ3Z5MitXNU9UWTJ5ZFFMZFVxbERFNEU2MFhmdVZ6bk5zWjZDS0tYY1pVaW1ZTkkwNgpKa0t4bGRjd0V2cmI0SmN3M2RFQjdOOUwvSW9ZNXFBPQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K", "signerName": "kubernetes.io/kube-apiserver-client", "usages": [ "digital signature", "key encipherment", "client auth" ], "username": "admin" }, "status": { "conditions": [ { "lastTransitionTime": "2022-05-13T22:35:37Z", "lastUpdateTime": "2022-05-13T22:35:37Z", "message": "This CSR was denied by kubectl certificate deny.", "reason": "KubectlDeny", "status": "True", "type": "Denied" } ] } } ], "kind": "List", "metadata": { "resourceVersion": "" } } certificate.sh:51: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: Denied (Bcertificatesigningrequest.certificates.k8s.io "foo" deleted certificate.sh:53: Successful get csr {{range.items}}{{.metadata.name}}{{end}}: (Bcertificatesigningrequest.certificates.k8s.io/foo created certificate.sh:56: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: (Bcertificatesigningrequest.certificates.k8s.io/foo denied { "apiVersion": "v1", "items": [ { "apiVersion": "certificates.k8s.io/v1", "kind": "CertificateSigningRequest", "metadata": { "creationTimestamp": "2022-05-13T22:35:38Z", "name": "foo", "resourceVersion": "3723", "uid": "5746d1c9-86c6-40fc-97b9-b76c07944f1f" }, "spec": { "groups": [ "system:masters", "system:authenticated" ], "request": 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2d6Q0NBV3NDQVFBd0ZURVRNQkVHQTFVRUF4TUthM1ZpWlMxaFpHMXBiakNDQVNJd0RRWUpLb1pJaHZjTgpBUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTlJ5dFhkcWV6ZTFBdXFjZkpWYlFBY1BJejZWY2pXSTZ5WmlQa3lrCjAzUW9GaHJGRXhUQXNPTGVFUHlrQXc1YndUOWZiajRXMzZmR2k4RGxsd1FzVGoyYzVUTnBnQkkwbElDbzI4aGcKbHYvTDJsMnRsWUVKdDdTbVhjblNvaGJ5S0h4TERRUHVmTVBBTkZsaEFmTUdCWEhRcmZMajhrTk1MUDA4UlBsbAp0N3V4RDVRdFA0cHlGL1Nhbm1XVEtRNU56WlJ4TC82UmhJMEpxSHJmNFFjQmg2dlR5bnFaRGVmMWVxNjBnQXllClNPRkpKYWRuK3h2VEFqLzgxZk1TbjdOSlNnaktDYkNEeXQ1eS9UZHd0SzZnVUQzM01paE5uNXhKTVF0MUZXUVAKRzY3eTA1QVh6b0pqTm5sWVA1MnJsTlhvNzh6aVMrN1E4RklxQzY0c05vWWhxeGNDQXdFQUFhQXBNQ2NHQ1NxRwpTSWIzRFFFSkRqRWFNQmd3Q1FZRFZSMFRCQUl3QURBTEJnTlZIUThFQkFNQ0JlQXdEUVlKS29aSWh2Y05BUUVMCkJRQURnZ0VCQU5CazlwaHpWYUJBci9xZHN4bXdPR1NQa094UkZlR1lyemRvaW5LTzVGUGZER2JkU0VWQ0o1K0wKeWJTNUtmaUZYU1EvNmk0RE9WRWtxcnFrVElIc1JNSlJwbTZ5Zjk1TU4zSWVLak9jQlV2b2VWVlpxMUNOUU8zagp2dklmK1A1NStLdXpvK0NIT1F5RWlvTlRPaUtGWTJseStEZEEwMXMxbU9FMTZSWGlWeFhGcFhGeGRJVmRPK0oxClZ1MW5yWG5ZVFJQRmtyaG80MTlpaDQzNjRPcGZqYXFXVCtmd20ySVZQSlBoaUJpYi9RRzRhUGJJcFh3amlCUUMKemV6WlM2L01nQkt1bUdMZ3Z5MitXNU9UWTJ5ZFFMZFVxbERFNEU2MFhmdVZ6bk5zWjZDS0tYY1pVaW1ZTkkwNgpKa0t4bGRjd0V2cmI0SmN3M2RFQjdOOUwvSW9ZNXFBPQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K", "signerName": "kubernetes.io/kube-apiserver-client", "usages": [ "digital signature", "key encipherment", "client auth" ], "username": "admin" }, "status": { "conditions": [ { "lastTransitionTime": "2022-05-13T22:35:38Z", "lastUpdateTime": "2022-05-13T22:35:38Z", "message": "This CSR was denied by kubectl certificate deny.", "reason": "KubectlDeny", "status": "True", "type": "Denied" } ] } } ], "kind": "List", "metadata": { "resourceVersion": "" } } certificate.sh:59: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: Denied (Bcertificatesigningrequest.certificates.k8s.io "foo" deleted certificate.sh:61: Successful get csr {{range.items}}{{.metadata.name}}{{end}}: (B+++ exit code: 0 Recording: run_cluster_management_tests Running command: run_cluster_management_tests +++ Running case: test-cmd.run_cluster_management_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_cluster_management_tests +++ [0513 22:35:38] Creating namespace namespace-1652481338-4983 namespace/namespace-1652481338-4983 created Context "test" modified. 
+++ [0513 22:35:38] Testing cluster-management commands
node-management.sh:85: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:
pod/test-pod-1 created
pod/test-pod-2 created
node-management.sh:91: Successful get nodes 127.0.0.1 {{range .spec.taints}}{{if eq .key \"dedicated\"}}{{.key}}={{.value}}:{{.effect}}{{end}}{{end}}:
node/127.0.0.1 tainted
node/127.0.0.1 tainted
node-management.sh:95: Successful get nodes 127.0.0.1 {{range .spec.taints}}{{if eq .key \"dedicated\"}}{{.key}}={{.value}}:{{.effect}}{{end}}{{end}}:
node/127.0.0.1 tainted
node-management.sh:98: Successful get nodes 127.0.0.1 {{range .spec.taints}}{{if eq .key \"dedicated\"}}{{.key}}={{.value}}:{{.effect}}{{end}}{{end}}: dedicated=foo:PreferNoSchedule
node/127.0.0.1 untainted
node/127.0.0.1 tainted
node-management.sh:103: Successful get nodes 127.0.0.1 {{range .spec.taints}}{{if eq .key \"dedicated\"}}{{.key}}={{.value}}:{{.effect}}{{end}}{{end}}: dedicated=:PreferNoSchedule
Successful
message:kubectl-create kube-controller-manager kube-controller-manager kubectl-taint
has:kubectl-taint
node/127.0.0.1 untainted
node/127.0.0.1 untainted
node-management.sh:110: Successful get nodes 127.0.0.1 {{range .spec.taints}}{{if eq .key \"dedicated\"}}{{.key}}={{.value}}:{{.effect}}{{end}}{{end}}: dedicated=:PreferNoSchedule
node/127.0.0.1 untainted
node-management.sh:114: Successful get nodes 127.0.0.1 {{range .spec.taints}}{{if eq .key \"dedicated\"}}{{.key}}={{.value}}:{{.effect}}{{end}}{{end}}:
I0513 22:35:40.529917 56663 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0513 22:35:40.529966 56663 shared_informer.go:262] Caches are synced for garbage collector
node-management.sh:118: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}:
node/127.0.0.1 cordoned (dry run)
node/127.0.0.1 cordoned (server dry run)
node-management.sh:121: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}:
node-management.sh:125: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}:
node/127.0.0.1 cordoned (dry run)
WARNING: deleting Pods that declare no controller: namespace-1652481338-4983/test-pod-1, namespace-1652481338-4983/test-pod-2
evicting pod namespace-1652481338-4983/test-pod-1 (dry run)
evicting pod namespace-1652481338-4983/test-pod-2 (dry run)
node/127.0.0.1 drained (dry run)
node/127.0.0.1 cordoned (server dry run)
WARNING: deleting Pods that declare no controller: namespace-1652481338-4983/test-pod-1, namespace-1652481338-4983/test-pod-2
evicting pod namespace-1652481338-4983/test-pod-2 (server dry run)
evicting pod namespace-1652481338-4983/test-pod-1 (server dry run)
node/127.0.0.1 drained (server dry run)
node-management.sh:129: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:
node-management.sh:130: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}:
node-management.sh:134: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}:
node-management.sh:136: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2,
node/127.0.0.1 cordoned (dry run)
WARNING: deleting Pods that declare no controller: namespace-1652481338-4983/test-pod-1
evicting pod namespace-1652481338-4983/test-pod-1 (dry run)
node/127.0.0.1 drained (dry run)
node/127.0.0.1 cordoned (server dry run)
WARNING: deleting Pods that declare no controller: namespace-1652481338-4983/test-pod-1
evicting pod namespace-1652481338-4983/test-pod-1 (server dry run)
node/127.0.0.1 drained (server dry run)
W0513 22:35:41.578320 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:35:41.578349 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
node-management.sh:140: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2,
WARNING: deleting Pods that declare no controller: namespace-1652481338-4983/test-pod-1
W0513 22:35:54.195262 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:35:54.195291 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0513 22:36:06.123853 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:36:06.123888 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0513 22:36:11.711094 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:36:11.711128 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:node/127.0.0.1 cordoned
evicting pod namespace-1652481338-4983/test-pod-1
pod "test-pod-1" has DeletionTimestamp older than 1 seconds, skipping
node/127.0.0.1 drained
has:evicting pod .*/test-pod-1
node-management.sh:145: Successful get pods/test-pod-2 {{.metadata.deletionTimestamp}}:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test-pod-1" force deleted
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test-pod-2" force deleted pod/test-pod-1 created pod/test-pod-2 created node/127.0.0.1 uncordoned node-management.sh:151: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (Bnode-management.sh:155: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (BSuccessful (Bmessage:node/127.0.0.1 already uncordoned (dry run) has:already uncordoned Successful (Bmessage:node/127.0.0.1 already uncordoned (server dry run) has:already uncordoned node-management.sh:161: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (Bnode/127.0.0.1 labeled node-management.sh:166: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label (BSuccessful (Bmessage:error: cannot specify both a node name and a --selector option See 'kubectl drain -h' for help and examples has:cannot specify both a node name node-management.sh:172: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label (Bnode-management.sh:174: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (Bnode-management.sh:176: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2, (BSuccessful (Bmessage:I0513 22:36:15.079743 91553 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:36:15.084241 91553 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:36:15.108064 91553 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?labelSelector=test%3Dlabel&limit=1 200 OK in 1 milliseconds node/127.0.0.1 cordoned (dry run) I0513 22:36:15.110854 91553 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds I0513 22:36:15.113615 91553 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6Mzc2OCwic3RhcnQiOiJuYW1lc3BhY2UtMTY1MjQ4MTMzOC00OTgzL3Rlc3QtcG9kLTFcdTAwMDAifQ&fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds WARNING: deleting Pods that declare no controller: namespace-1652481338-4983/test-pod-1, namespace-1652481338-4983/test-pod-2 evicting pod namespace-1652481338-4983/test-pod-1 (dry run) evicting pod namespace-1652481338-4983/test-pod-2 (dry run) node/127.0.0.1 drained (dry run) has:/v1/nodes?labelSelector=test%3Dlabel&limit=1 200 OK Successful (Bmessage:I0513 22:36:15.079743 91553 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:36:15.084241 91553 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:36:15.108064 91553 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?labelSelector=test%3Dlabel&limit=1 200 OK in 1 milliseconds node/127.0.0.1 cordoned (dry run) I0513 22:36:15.110854 91553 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds I0513 22:36:15.113615 91553 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6Mzc2OCwic3RhcnQiOiJuYW1lc3BhY2UtMTY1MjQ4MTMzOC00OTgzL3Rlc3QtcG9kLTFcdTAwMDAifQ&fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds WARNING: deleting Pods that declare no controller: namespace-1652481338-4983/test-pod-1, namespace-1652481338-4983/test-pod-2 evicting pod namespace-1652481338-4983/test-pod-1 (dry run) evicting pod namespace-1652481338-4983/test-pod-2 (dry 
run) node/127.0.0.1 drained (dry run) has:/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK Successful (Bmessage:I0513 22:36:15.079743 91553 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:36:15.084241 91553 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:36:15.108064 91553 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?labelSelector=test%3Dlabel&limit=1 200 OK in 1 milliseconds node/127.0.0.1 cordoned (dry run) I0513 22:36:15.110854 91553 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds I0513 22:36:15.113615 91553 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6Mzc2OCwic3RhcnQiOiJuYW1lc3BhY2UtMTY1MjQ4MTMzOC00OTgzL3Rlc3QtcG9kLTFcdTAwMDAifQ&fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds WARNING: deleting Pods that declare no controller: namespace-1652481338-4983/test-pod-1, namespace-1652481338-4983/test-pod-2 evicting pod namespace-1652481338-4983/test-pod-1 (dry run) evicting pod namespace-1652481338-4983/test-pod-2 (dry run) node/127.0.0.1 drained (dry run) has:/v1/pods?continue=.*&fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK Successful (Bmessage:I0513 22:36:15.079743 91553 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:36:15.084241 91553 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:36:15.108064 91553 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?labelSelector=test%3Dlabel&limit=1 200 OK in 1 milliseconds node/127.0.0.1 cordoned (dry run) I0513 22:36:15.110854 91553 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds I0513 22:36:15.113615 91553 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6Mzc2OCwic3RhcnQiOiJuYW1lc3BhY2UtMTY1MjQ4MTMzOC00OTgzL3Rlc3QtcG9kLTFcdTAwMDAifQ&fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds WARNING: deleting Pods that declare no controller: namespace-1652481338-4983/test-pod-1, namespace-1652481338-4983/test-pod-2 evicting pod namespace-1652481338-4983/test-pod-1 (dry run) evicting pod namespace-1652481338-4983/test-pod-2 (dry run) node/127.0.0.1 drained (dry run) has:evicting pod .*/test-pod-1 Successful (Bmessage:I0513 22:36:15.079743 91553 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:36:15.084241 91553 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:36:15.108064 91553 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?labelSelector=test%3Dlabel&limit=1 200 OK in 1 milliseconds node/127.0.0.1 cordoned (dry run) I0513 22:36:15.110854 91553 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds I0513 22:36:15.113615 91553 round_trippers.go:553] GET 
https://127.0.0.1:6443/api/v1/pods?continue=eyJ2IjoibWV0YS5rOHMuaW8vdjEiLCJydiI6Mzc2OCwic3RhcnQiOiJuYW1lc3BhY2UtMTY1MjQ4MTMzOC00OTgzL3Rlc3QtcG9kLTFcdTAwMDAifQ&fieldSelector=spec.nodeName%3D127.0.0.1&labelSelector=type%3Dtest-pod&limit=1 200 OK in 1 milliseconds WARNING: deleting Pods that declare no controller: namespace-1652481338-4983/test-pod-1, namespace-1652481338-4983/test-pod-2 evicting pod namespace-1652481338-4983/test-pod-1 (dry run) evicting pod namespace-1652481338-4983/test-pod-2 (dry run) node/127.0.0.1 drained (dry run) has:evicting pod .*/test-pod-2 node/127.0.0.1 already uncordoned node-management.sh:188: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: (BSuccessful (Bmessage:I0513 22:36:15.281135 91595 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:36:15.285617 91595 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:36:15.310465 91595 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?labelSelector=test%3Dlabel&limit=500 200 OK in 1 milliseconds node/127.0.0.1 cordoned (dry run) I0513 22:36:15.312793 91595 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&limit=500 200 OK in 1 milliseconds WARNING: deleting Pods that declare no controller: namespace-1652481338-4983/test-pod-1, namespace-1652481338-4983/test-pod-2 evicting pod namespace-1652481338-4983/test-pod-1 (dry run) evicting pod namespace-1652481338-4983/test-pod-2 (dry run) node/127.0.0.1 drained (dry run) has:/v1/nodes?labelSelector=test%3Dlabel&limit=500 200 OK Successful (Bmessage:I0513 22:36:15.281135 91595 loader.go:372] Config loaded from file: /tmp/tmp.Md4WNb5sHW/.kube/config I0513 22:36:15.285617 91595 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds I0513 22:36:15.310465 91595 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/nodes?labelSelector=test%3Dlabel&limit=500 200 OK in 1 milliseconds node/127.0.0.1 cordoned (dry run) I0513 22:36:15.312793 91595 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&limit=500 200 OK in 1 milliseconds WARNING: deleting Pods that declare no controller: namespace-1652481338-4983/test-pod-1, namespace-1652481338-4983/test-pod-2 evicting pod namespace-1652481338-4983/test-pod-1 (dry run) evicting pod namespace-1652481338-4983/test-pod-2 (dry run) node/127.0.0.1 drained (dry run) has:/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&limit=500 200 OK Successful (Bmessage:error: USAGE: cordon NODE [flags] See 'kubectl cordon -h' for help and examples has:error\: USAGE\: cordon NODE node/127.0.0.1 already uncordoned Successful (Bmessage:error: You must provide one or more resources by argument or filename. Example resource specifications include: '-f rsrc.yaml' '--filename=rsrc.json' ' ' '' has:must provide one or more resources Successful (Bmessage:node/127.0.0.1 cordoned has:node/127.0.0.1 cordoned Successful (Bmessage: has not:cordoned node-management.sh:213: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: true (Bwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "test-pod-1" force deleted warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. 
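Every cordon/drain assertion in the section above runs twice, once per dry-run mode, so both the "(dry run)" and "(server dry run)" suffixes are checked without any real eviction; the verbose blocks additionally pin the paginated list calls (limit=1 plus a continue token) that drain issues while walking nodes and pods. A sketch of the invocations being exercised (node name and selectors taken from the log; --chunk-size=1 is my reading of the limit=1 queries, not a flag visible in the output):

  # Dry-run drains: client-side, then validated server-side.
  kubectl drain 127.0.0.1 --force --dry-run=client
  kubectl drain 127.0.0.1 --force --dry-run=server
  # Selector-driven drain with a tiny page size, forcing the paginated
  # nodes/pods list requests (limit=1 plus a continue token) seen above.
  kubectl drain --selector=test=label --pod-selector=type=test-pod --chunk-size=1 --force --dry-run=client -v=6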
pod "test-pod-2" force deleted +++ exit code: 0 Recording: run_plugins_tests Running command: run_plugins_tests +++ Running case: test-cmd.run_plugins_tests +++ working dir: /home/prow/go/src/k8s.io/kubernetes +++ command: run_plugins_tests +++ [0513 22:36:15] Testing kubectl plugins Successful (Bmessage:The following compatible plugins are available: test/fixtures/pkg/kubectl/plugins/version/kubectl-version - warning: kubectl-version overwrites existing command: "kubectl version" error: one plugin warning was found has:kubectl-version overwrites existing command: "kubectl version" Successful (Bmessage:The following compatible plugins are available: test/fixtures/pkg/kubectl/plugins/kubectl-foo test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo error: one plugin warning was found has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin Successful (Bmessage:The following compatible plugins are available: test/fixtures/pkg/kubectl/plugins/kubectl-foo has:plugins are available Successful (Bmessage:Unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping... error: unable to find any kubectl plugins in your PATH has:unable to find any kubectl plugins in your PATH Successful (Bmessage:I am plugin foo has:plugin foo Successful (Bmessage:I am plugin bar called with args test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1 has:test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1 WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version. 
Successful
message:Client Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.0.494+344185089155f1", GitCommit:"344185089155f1413d7121814ac8a1a6b218e0de", GitTreeState:"clean", BuildDate:"2022-05-13T21:24:06Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
has:Client Version
Successful
message:Client Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.0.494+344185089155f1", GitCommit:"344185089155f1413d7121814ac8a1a6b218e0de", GitTreeState:"clean", BuildDate:"2022-05-13T21:24:06Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
has not:overshadows an existing plugin
+++ exit code: 0
Recording: run_impersonation_tests
Running command: run_impersonation_tests
+++ Running case: test-cmd.run_impersonation_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_impersonation_tests
+++ [0513 22:36:16] Testing impersonation
Successful
message:error: requesting uid, groups or user-extra for test-admin without impersonating a user
has:without impersonating a user
Successful
message:error: requesting uid, groups or user-extra for test-admin without impersonating a user
has:without impersonating a user
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:60: Successful get csr/foo {{.spec.username}}: user1
authorization.sh:61: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:66: Successful get csr/foo {{len .spec.groups}}: 4
authorization.sh:67: Successful get csr/foo {{range .spec.groups}}{{.}} {{end}}: group2 group1 ,,,chameleon system:authenticated
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:72: Successful get csr/foo {{.spec.username}}: user1
W0513 22:36:17.320838 56663 reflector.go:324] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0513 22:36:17.320875 56663 reflector.go:138] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
authorization.sh:73: Successful get csr/foo {{.spec.uid}}: abc123
certificatesigningrequest.certificates.k8s.io "foo" deleted
+++ exit code: 0
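The impersonation tests above check both the guard rail and the happy path: --as-uid or --as-group without --as is rejected, while a create performed under impersonation records the impersonated identity in the CSR's .spec (username user1, uid abc123, the extra groups). A sketch of both, reusing the assumed csr.yaml fixture from earlier:

  # Guard rail: uid/group impersonation requires --as.
  kubectl create -f csr.yaml --as-uid=abc123
  # -> error: requesting uid, groups or user-extra for test-admin without impersonating a user
  # Happy path: the API server records the impersonated identity in spec.
  kubectl create -f csr.yaml --as=user1 --as-group=group1 --as-group=group2 --as-uid=abc123
  kubectl get csr/foo -o go-template='{{.spec.username}}/{{.spec.uid}}'   # -> user1/abc123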
Recording: run_wait_tests
Running command: run_wait_tests
+++ Running case: test-cmd.run_wait_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_wait_tests
+++ [0513 22:36:17] Testing kubectl wait
+++ [0513 22:36:17] Creating namespace namespace-1652481377-28880
namespace/namespace-1652481377-28880 created
Context "test" modified.
deployment.apps/test-1 created
I0513 22:36:17.699029 56663 event.go:294] "Event occurred" object="namespace-1652481377-28880/test-1" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-1-858d6766c9 to 1"
I0513 22:36:17.717193 56663 event.go:294] "Event occurred" object="namespace-1652481377-28880/test-1-858d6766c9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-1-858d6766c9-xr8t5"
deployment.apps/test-2 created
I0513 22:36:17.760501 56663 event.go:294] "Event occurred" object="namespace-1652481377-28880/test-2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-2-8bd5b8858 to 1"
I0513 22:36:17.769811 56663 event.go:294] "Event occurred" object="namespace-1652481377-28880/test-2-8bd5b8858" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-2-8bd5b8858-n6644"
wait.sh:36: Successful get deployments {{range .items}}{{.metadata.name}},{{end}}: test-1,test-2,
deployment.apps "test-1" deleted
deployment.apps "test-2" deleted
Successful
message:deployment.apps/test-1 condition met
deployment.apps/test-2 condition met
has:test-1 condition met
Successful
message:deployment.apps/test-1 condition met
deployment.apps/test-2 condition met
has:test-2 condition met
+++ exit code: 0
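The wait test above issues the deletes and then blocks until the API server confirms the awaited state for each named deployment, printing one "condition met" line per resource. The exact flags the harness passes are not visible in this log, so as a generic sketch of kubectl wait's documented shapes:

  # Block until a condition is reported on the object.
  kubectl wait deployment/test-1 --for=condition=Available --timeout=60s
  # Or block until the named objects are gone entirely.
  kubectl wait deployment/test-1 deployment/test-2 --for=delete --timeout=60s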
Recording: run_kubectl_debug_pod_tests
Running command: run_kubectl_debug_pod_tests
+++ Running case: test-cmd.run_kubectl_debug_pod_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_debug_pod_tests
+++ [0513 22:36:20] Creating namespace namespace-1652481380-24404
namespace/namespace-1652481380-24404 created
Context "test" modified.
+++ [0513 22:36:20] Testing kubectl debug (pod tests)
pod/target created
debug.sh:32: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:
debug.sh:36: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:target-copy:
debug.sh:37: Successful get pod/target-copy {{range.spec.containers}}{{.name}}:{{end}}: target:debug-container:
debug.sh:38: Successful get pod/target-copy {{range.spec.containers}}{{.image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:busybox:
pod "target" deleted
pod "target-copy" deleted
pod/target created
debug.sh:44: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:
debug.sh:48: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target-copy:
debug.sh:49: Successful get pod/target-copy {{range.spec.containers}}{{.name}}:{{end}}: target:debug-container:
debug.sh:50: Successful get pod/target-copy {{range.spec.containers}}{{.image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:busybox:
pod "target-copy" deleted
pod/target created
debug.sh:56: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:
debug.sh:57: Successful get pod/target {{(index .spec.containers 0).name}}: target
debug.sh:61: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:target-copy:
debug.sh:62: Successful get pod/target-copy {{(len .spec.containers)}}:{{(index .spec.containers 0).image}}: 1:busybox
pod "target" deleted
pod "target-copy" deleted
+++ exit code: 0
Recording: run_kubectl_debug_node_tests
Running command: run_kubectl_debug_node_tests
+++ Running case: test-cmd.run_kubectl_debug_node_tests
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_debug_node_tests
+++ [0513 22:36:21] Creating namespace namespace-1652481381-14216
namespace/namespace-1652481381-14216 created
Context "test" modified.
+++ [0513 22:36:21] Testing kubectl debug (node tests)
debug.sh:80: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:
debug.sh:84: Successful get pod {{(len .items)}}: 1
Successful
message:Creating debugging pod node-debugger-127.0.0.1-ns86j with container debugger on node 127.0.0.1.
has:node-debugger-127.0.0.1-ns86j
debug.sh:87: Successful get pod/node-debugger-127.0.0.1-ns86j {{(index .spec.containers 0).image}}: busybox
debug.sh:88: Successful get pod/node-debugger-127.0.0.1-ns86j {{.spec.nodeName}}: 127.0.0.1
debug.sh:89: Successful get pod/node-debugger-127.0.0.1-ns86j {{.spec.hostIPC}}: true
debug.sh:90: Successful get pod/node-debugger-127.0.0.1-ns86j {{.spec.hostNetwork}}: true
debug.sh:91: Successful get pod/node-debugger-127.0.0.1-ns86j {{.spec.hostPID}}: true
debug.sh:92: Successful get pod/node-debugger-127.0.0.1-ns86j {{(index (index .spec.containers 0).volumeMounts 0).mountPath}}: /host
debug.sh:93: Successful get pod/node-debugger-127.0.0.1-ns86j {{(index .spec.volumes 0).hostPath.path}}: /
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "node-debugger-127.0.0.1-ns86j" force deleted
+++ exit code: 0
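The debug tests above cover the two kubectl debug modes visible in the output: pod mode copies "target" to "target-copy", adding a busybox debug-container next to the original nginx container (or replacing its image), and node mode creates the host-namespace node-debugger pod with the node's root filesystem mounted at /host. Sketches of the two shapes (flag spellings are kubectl's documented ones; the harness wraps them in the assertions shown above):

  # Pod mode: copy "target" to "target-copy" with an extra busybox container.
  kubectl debug target --image=busybox --container=debug-container --copy-to=target-copy
  # Node mode: hostNetwork/hostPID/hostIPC pod with / mounted at /host.
  kubectl debug node/127.0.0.1 --image=busybox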
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
No resources found
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
No resources found
FAILED TESTS: run_kubectl_request_timeout_tests,
junit report dir: /logs/artifacts
+++ [0513 22:36:23] Clean up complete
make: *** [Makefile:293: test-cmd] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.